This document summarizes a research paper that proposes an approach for automatically authenticating handwritten documents based on low-density pixel measurements. The approach analyzes subtle spatial features related to pen pressure, such as the percentage of low-density pixels in a signature, which can help distinguish genuine signatures from skilled forgeries that appear very similar. Ten features are extracted, including the low- and high-density pixel percentages and their ratio. An adaptive threshold is also introduced to make verification judgments. The method is tested on a dataset of 200 genuine and 200 forged signature images. Results show it can effectively detect skilled forgeries that current static-feature-based methods struggle with.
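The summary does not give the paper's exact gray-level cutoffs, so the sketch below only illustrates the general idea: count faint (low-density) and heavy (high-density) ink pixels and form their ratio. The threshold values and background cutoff are assumptions, not the paper's parameters.

```python
# Sketch (not the paper's exact method): density features from a grayscale
# signature image, 0 = black ink, 255 = white paper. Cutoffs are assumptions.
import numpy as np

BACKGROUND = 250   # assumed: pixels at or above this are blank paper
LOW_CUTOFF = 200   # assumed: faint ink, i.e. light pen pressure
HIGH_CUTOFF = 60   # assumed: dense ink, i.e. heavy pen pressure

def density_features(gray: np.ndarray) -> dict:
    """Percentages of low- and high-density ink pixels, plus their ratio."""
    ink = gray[gray < BACKGROUND]                  # ignore the background
    low_pct = 100.0 * np.count_nonzero(ink >= LOW_CUTOFF) / ink.size
    high_pct = 100.0 * np.count_nonzero(ink <= HIGH_CUTOFF) / ink.size
    ratio = low_pct / high_pct if high_pct > 0 else float("inf")
    return {"low_density_pct": low_pct,
            "high_density_pct": high_pct,
            "low_to_high_ratio": ratio}
```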
Offline Handwritten Signature Identification and Verification using Multi-Res... (CSCJournals)
In this paper, we propose a new method for offline (static) handwritten signature identification and verification based on the Gabor wavelet transform. The aim is a simple and robust feature-extraction method whose dependence on the signer's nationality is reduced to a minimum. After a pre-processing stage comprising noise reduction and normalisation of the signature image by size and rotation, a virtual grid is placed on the signature image. Gabor wavelet coefficients with different frequencies and orientations are computed at each grid point and then fed into a classifier. The shortest weighted distance is used as the classifier, with the weights derived from the distribution of instances in each signature class. One advantage of this system is its capability to identify and verify signatures of different nationalities; it has therefore been tested on four signature datasets of different nationalities: Iranian, Turkish, South African and Spanish. Experimental results and comparison of the proposed system with other systems are consistent with the desired outcomes. Despite using the simplest classification method, nearest neighbour, the proposed algorithm performs very well in comparison with other algorithms. Compared with the accuracy of human identification and verification, human identification is more accurate, but the proposed system has a lower error rate in verification.
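As a rough illustration of the grid-based Gabor features and weighted shortest-distance classification described above, the following sketch uses skimage's Gabor filter; the grid spacing, frequencies, orientations and the exact weighting scheme are illustrative assumptions, not the paper's parameters.

```python
# Illustrative sketch: Gabor responses sampled on a virtual grid, classified
# by a weighted shortest distance. All parameter values are assumptions.
import numpy as np
from skimage.filters import gabor

def gabor_grid_features(image, grid_step=16,
                        frequencies=(0.1, 0.2, 0.3),
                        thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack Gabor magnitude responses at grid points for each
    frequency/orientation pair."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.append(mag[::grid_step, ::grid_step].ravel())
    return np.concatenate(feats)

def weighted_distance(x, class_mean, class_std):
    """Shortest weighted distance: classes with tighter feature spread
    (smaller std) weigh deviations more heavily (assumed weighting)."""
    w = 1.0 / (class_std + 1e-8)
    return np.linalg.norm(w * (x - class_mean))
```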
The state of the art in handwriting synthesis (Randa Elanwar)
In this paper we present a literature review of recent trends in handwriting synthesis, highlighting the different generation processes and pointing out the challenges facing researchers. Finally, we draw conclusions about the research collections presented and summarize our opinions to help move future work toward maturity.
The document discusses a new approach for identifying the script of words in low-resolution images of display boards using texture features. It aims to identify words in three scripts: Hindi, Kannada, and English. The proposed method extracts discrete cosine transform-based texture features from word images and uses a threshold-based function to classify the script. Evaluated on 800 word images, it achieved an overall accuracy of 85.44%, with individual accuracies of 100% for Hindi, 70.33% for Kannada, and 86% for English. The method is robust to variations in fonts, character spacing, noise and other degradations.
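A minimal sketch of DCT-based texture features for a word image, assuming the low-frequency coefficient block carries the script's texture signature; the block size and the use of scipy are assumptions.

```python
# Illustrative sketch: DCT-based texture features for a word image.
import numpy as np
from scipy.fft import dctn

def dct_texture_features(word_img: np.ndarray, keep: int = 8) -> np.ndarray:
    """2-D DCT; keep the top-left low-frequency block as a texture signature."""
    coeffs = dctn(word_img.astype(float), norm="ortho")
    block = coeffs[:keep, :keep].copy()
    block[0, 0] = 0.0            # drop the DC term (overall brightness)
    return np.abs(block).ravel()
```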
The document discusses optical character recognition for Urdu handwriting. It introduces OCR and its applications. It then discusses earlier work on OCR systems that were font-specific. The document outlines the steps in OCR including image acquisition, preprocessing, segmentation, feature extraction, classification, and recognition. It provides an overview of the Urdu script and its variations. The document then summarizes research conducted on recognizing offline isolated Urdu characters using moment invariants and support vector machines. Other works discussed include an online and offline OCR system for Urdu using a segmentation-free approach, classifying Urdu ligatures using convolutional neural networks, and a segmentation-based approach for Urdu Nastaliq script recognition.
SPICE MODEL of C4D15120A LTspice Model (Professional Model) in SPICE PARK (Tsuyoshi Horigome)
SPICE MODEL of C4D15120A LTspice Model (Professional Model) in SPICE PARK. English Version is http://www.spicepark.net. Japanese Version is http://www.spicepark.com by Bee Technologies.
FREE SPICE MODEL of 1SS272 in SPICE PARK. English Version is http://www.spicepark.net. Japanese Version is http://www.spicepark.com by Bee Technologies.
Dochelp-An artificially intelligent medical diagnosis system (Tejaswi Agarwal)
Dochelp-An artificially intelligent medical diagnosis system is a project for the course on Artificial Intelligence for the Spring 2012 semester during the sophomore year.
This document describes a method for off-line handwritten signature verification based on analyzing grey level texture features in the signature image. It uses statistical texture analysis methods like the co-occurrence matrix and local binary pattern to measure grey level variations, providing rotation and luminance invariance. Signatures are preprocessed to remove background and normalize for different ink pens. Texture features are extracted and used to train a support vector machine classifier to discriminate between genuine signatures and forgeries. The method is evaluated on standard databases and shows reasonable performance according to the state-of-the-art for off-line signature verification approaches.
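The following sketch, assuming scikit-image and scikit-learn, shows how GLCM statistics and an LBP histogram might be combined into a texture descriptor for an SVM, in the spirit of the method described; the distances, angles and LBP parameters are illustrative choices.

```python
# Illustrative sketch: GLCM + LBP texture descriptor feeding an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def texture_descriptor(gray: np.ndarray) -> np.ndarray:
    """gray: uint8 image. Returns GLCM statistics + uniform-LBP histogram."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy")])
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.hstack([glcm_feats, hist])

# Training on labelled genuine/forged images would then be, e.g.:
# clf = SVC(kernel="rbf").fit([texture_descriptor(im) for im in imgs], labels)
```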
Effective Implementation techniques in Offline Signature Verification (IOSR Journals)
This document summarizes and compares the results of implementing three offline signature verification techniques: simple mask filtering, local binary pattern (LBP), and local directional pattern (LDP). The techniques were tested on a local signature database containing samples from multiple signers. For each technique, the original signature image was converted to grayscale and then filtered. Boundary extraction was performed and the filtered image was compared to signatures in the database to calculate a matching score. Observation showed that LDP produced more stable patterns and was less sensitive to pen variability compared to the other techniques.
Proposed Method for Off-line Signature Recognition and Verification using Neu... (Editor IJMTER)
Computers have become common and are used in almost every field, including financial transactions, so providing additional security measures is necessary. To meet consumers' expectations, these security measures must be cheap, reliable and unintrusive to the authorized person. A technique which meets these requirements is handwritten signature verification. Signature verification has an advantage over other biometric techniques (voice, iris, fingerprint, palm, etc.) as it is widely used in daily routine procedures like banking operations, document analysis, electronic funds transfer and access control. Most importantly, it is easy, and people are less likely to object to it. The proposed technique uses a new approach based on a neural network which enables the user to recognize whether a signature is original or a fraud. Scanned images are introduced into the computer, their quality is improved with the help of image enhancement and noise reduction techniques, specific features are extracted, and a neural network is trained. The stages of the process are image pre-processing, feature extraction and pattern recognition through neural networks.
This document summarizes a research paper on latent fingerprint matching using descriptor-based Hough transform. It describes an approach that consists of three main steps: 1) aligning two sets of minutiae points using a descriptor-based Hough transform, 2) establishing correspondences between minutiae points, and 3) computing a similarity score. The goal is to develop an algorithm for automatically matching latent fingerprints to rolled fingerprints using only minutiae information. Experimental results on a public dataset show it outperforms a commercial fingerprint matcher.
This document describes a system for offline signature verification. It begins with an introduction to the problem and approaches for online and offline verification. It then outlines the proposed approach, which involves preprocessing the signature image, extracting seven geometric features, training the system using sample signatures, and verifying new signatures by comparing their features to the training data. The key steps are preprocessing to remove noise, feature extraction of attributes like slant angle, aspect ratio, area, center of gravity, edge and cross points, and verification by calculating the Euclidean distance between a test signature's features and the trained template. Implementation details and results are also mentioned.
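A minimal sketch of geometric-feature extraction and Euclidean-distance verification in the spirit of the approach described above; the particular features and normalisation are assumptions approximating the attributes named in the summary (edge and cross points are omitted for brevity).

```python
# Illustrative sketch: simple geometric features + Euclidean-distance check.
import numpy as np

def geometric_features(binary: np.ndarray) -> np.ndarray:
    """binary: boolean array, True on signature ink."""
    ys, xs = np.nonzero(binary)
    h, w = ys.ptp() + 1, xs.ptp() + 1          # bounding-box height/width
    area = binary.sum()
    cy, cx = ys.mean(), xs.mean()              # centre of gravity
    return np.array([w / h, area, cx, cy, h, w, area / (h * w)])

def verify(test_feat, training_feats, threshold):
    """Accept if the normalised Euclidean distance to the mean of the
    training signatures is below the (assumed) threshold."""
    mean = training_feats.mean(axis=0)
    std = training_feats.std(axis=0) + 1e-8
    return np.linalg.norm((test_feat - mean) / std) <= threshold
```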
FORGED CHARACTER DETECTION DATASETS: PASSPORTS, DRIVING LICENCES AND VISA STI... (ijaia)
Forged documents, specifically passports, driving licences and VISA stickers, are used for fraud purposes including robbery, theft and many more, so detecting forged characters in documents is a significantly important and challenging task in digital forensic imaging. Forged-character detection faces two big challenges. First, data for forged-character detection is extremely difficult to obtain for several reasons, including limited access to data, unlabeled data, or work done on private data. Second, deep learning (DL) algorithms require labeled data, which poses a further challenge, as labeling is tedious, time-consuming, expensive and requires domain expertise. To address these issues, this paper proposes a novel algorithm that generates three datasets, namely forged characters detection for passports (FCD-P), forged characters detection for driving licences (FCD-D) and forged characters detection for VISA stickers (FCD-V). To the best of our knowledge, we are the first to release such datasets. The proposed algorithm starts by reading plain document images and simulates forging tasks on five different countries' passports, driving licences and VISA stickers. It then keeps bounding boxes as a record of the forged characters, as a labeling process. Furthermore, considering real-world scenarios, selected data augmentation was applied accordingly. Each dataset consists of 15,000 images of size 950 x 550. For further research purposes we release our algorithm code and the datasets FCD-P, FCD-D and FCD-V.
An offline signature verification using pixels intensity levels (Salam Shah)
Offline signature recognition has great importance in our day-to-day activities. Researchers are trying to use signatures as biometric identifiers in various areas like banks, security systems and other identification settings. Fingerprint, iris, thumb-impression and face-detection based biometrics are successfully used for identification of individuals because of their static nature. However, people's signatures show variability that makes it difficult to recognize the original signatures correctly and to use them as biometrics. Handwritten signatures are important in banks for cheque and credit card processing and for legal and financial transactions, and they are a main target of fraud. To deal with complex signatures, places such as banks need a robust signature verification method that can correctly classify signatures as genuine or forged to avoid financial fraud. This paper presents a pixel-intensity-level based offline signature verification model for the correct classification of signatures. To achieve this, three statistical classifiers are used: Decision Tree (J48), probability-based Naïve Bayes (NB tree) and Euclidean-distance-based k-Nearest Neighbor (IBk).
To compare the accuracy rates of offline signatures with online signatures, the three classifiers were applied to an online signature database, achieving accuracy rates of 99.90% with Decision Tree (J48), 99.82% with Naïve Bayes Tree and 98.11% with k-Nearest Neighbor (with 10-fold cross-validation). On offline signatures without forgeries, the accuracy rates were 64.97% with Decision Tree (J48), 76.16% with Naïve Bayes Tree and 91.91% with k-Nearest Neighbor (IBk). With forgery signatures included, the accuracy rates dropped to 55.63% with Decision Tree (J48), 67.02% with Naïve Bayes Tree and 88.12% with k-Nearest Neighbor (IBk).
Design and implementation of optical character recognition using template mat... (eSAT Journals)
Abstract: Optical character recognition (OCR) is an efficient way of converting a scanned image into machine-encoded text that can then be edited. A variety of methods have been implemented in the field of character recognition. This paper proposes optical character recognition using template matching, with templates formed in a variety of fonts and sizes. In the proposed system, image pre-processing, feature extraction and classification algorithms have been implemented so as to build a strong character recognition technique for different scripts. Results of this approach are also discussed. The system is implemented in Matlab.
Keywords: OCR, Feature Extraction, Classification
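A small sketch of the template-matching core, assuming normalised cross-correlation via skimage.feature.match_template; the templates dictionary stands in for the multi-font template set the paper describes.

```python
# Illustrative sketch: template-matching OCR via normalised cross-correlation.
import numpy as np
from skimage.feature import match_template

def recognize_character(glyph: np.ndarray, templates: dict) -> str:
    """Return the label whose template gives the highest correlation peak.
    glyph and templates are float images; glyph must be at least as large
    as each template."""
    best_label, best_score = None, -np.inf
    for label, tmpl in templates.items():
        score = match_template(glyph, tmpl).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```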
This document discusses a handwriting recognition system that uses 3D discrete wavelet and multiwavelet transforms for feature extraction from Latin handwritten text. The proposed system performs preprocessing, feature extraction using 3D-DWT and 3D-DMWTCS transforms, pattern matching and classification using a minimum distance classifier, and postprocessing. The system achieves classification accuracies of 95.76% using 3D-DMWTCS and 94.05% using 3D-DWT on the Rimes database. The document provides background on handwriting recognition, typical models, and feature extraction methods including global, geometrical, topological, and statistical features.
Online Hand Written Character Recognition (IOSR Journals)
This document discusses online handwritten character recognition. It begins by describing the differences between online and offline recognition systems. Online systems capture stroke order and timing information while writing, while offline systems analyze static images. The document then discusses challenges in recognition like variability between writers. It presents several previous works in online handwriting recognition. The document proposes a method for online recognition that uses shape, pixel density, and stroke movement template matching to identify characters. It describes preprocessing input and generating training templates to match against. Overall, the document outlines challenges in online handwriting recognition and proposes a template matching approach to address these challenges.
A Novel Automated Approach for Offline Signature Verification Based on Shape ... (Editor IJCATR)
This document presents a novel approach for offline signature verification based on a shape matrix. The approach divides a signature image into a grid of cells and represents each signature as a binary matrix indicating the presence of signature strokes in each cell. To verify a signature, the test signature's shape matrix is compared to enrolled signature matrices using a template matching approach. The document describes the preprocessing, shape matrix generation, and similarity scoring steps. An experiment on a dataset of 1000 signatures from 50 individuals achieved high accuracy and low false acceptance/rejection rates, indicating the approach is effective at verifying signatures while ignoring normal variations between genuine signatures.
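A minimal sketch of the shape-matrix idea: rasterise the signature into a coarse boolean grid and score two signatures by the fraction of agreeing cells. The 16 x 16 grid size and the cell-agreement score are assumptions; the summary does not give the paper's exact choices.

```python
# Illustrative sketch: shape matrix and cell-agreement similarity.
import numpy as np

def shape_matrix(binary: np.ndarray, rows: int = 16, cols: int = 16) -> np.ndarray:
    """True in each grid cell that contains any signature stroke."""
    h, w = binary.shape
    m = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            cell = binary[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            m[i, j] = cell.any()
    return m

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of cells on which two shape matrices agree."""
    return float((a == b).mean())
```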
This document presents a statistical-ANN hybrid technique for offline signature recognition. It extracts features from signatures using statistical approaches like invariant moment methods, and then classifies signatures using an artificial neural network (ANN). The system takes in signature images, preprocesses them, extracts features, and trains an ANN classifier. It can recognize and verify signatures on a test database with reasonable accuracy. The hybrid statistical-ANN approach aims to minimize intra-personal variations between genuine signatures while maximizing inter-personal variations to distinguish forgeries.
OFFLINE SIGNATURE VERIFICATION SYSTEM FOR BANK CHEQUES USING ZERNIKE MOMENTS,... (ijaia)
Handwritten signature is the most accepted and economical means of personal authentication. It can be verified using online or offline schemes. This paper proposes a signature verification model combining Zernike moment features with circularity and aspect ratio. Unlike characters, signatures vary each time because of their behavioural biometric nature, and they can be identified by their shape. Moments are good translation- and scale-invariant shape descriptors. The amplitude and phase of the Zernike moments, together with the circularity and aspect ratio of the signature, are extracted, combined for verification, and fed to a feedforward backpropagation neural network, which classifies the signature as genuine or forged. Experimental results reveal that combining Zernike moments with the two geometrical properties gives higher accuracy than using them individually; the combined feature vector yields a mean accuracy of 95.83%. Compared with the literature, the approach proves to be more effective.
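A sketch of the combined feature vector, assuming mahotas for Zernike moments and scikit-image for region properties. Note that mahotas returns moment magnitudes only, so the phase component mentioned in the abstract is omitted; the moment degree and radius are also assumptions.

```python
# Illustrative sketch: Zernike magnitudes + circularity + aspect ratio.
import numpy as np
import mahotas
from skimage.measure import label, regionprops

def signature_features(binary: np.ndarray) -> np.ndarray:
    """binary: boolean image, assumed to hold one connected signature blob
    after preprocessing."""
    props = regionprops(label(binary.astype(int)))[0]
    circularity = 4 * np.pi * props.area / props.perimeter ** 2  # 1.0 = circle
    minr, minc, maxr, maxc = props.bbox
    aspect_ratio = (maxc - minc) / (maxr - minr)                 # width / height
    radius = max(binary.shape) // 2
    zernike = mahotas.features.zernike_moments(binary, radius, degree=8)
    return np.hstack([zernike, circularity, aspect_ratio])

# The combined vector would then be fed to a feedforward network, e.g.
# sklearn.neural_network.MLPClassifier, standing in for the paper's
# backpropagation network.
```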
AUTOMATIC TRAINING DATA SYNTHESIS FOR HANDWRITING RECOGNITION USING THE STRUC... (ijaia)
The paper presents a novel technique called "Structural Crossing-Over" to synthesize qualified data for training machine-learning-based handwriting recognition. The proposed technique can provide a greater variety of training patterns than existing approaches such as elastic distortion and tangent-based affine transformation. A pair of training characters is chosen, analyzed for their similar and different structures, and finally crossed over to generate new characters. The experiments compare tangent-based affine transformation and the proposed approach in terms of the variety of generated characters and the percentage of recognition errors. The standard MNIST corpus, comprising 60,000 training characters and 10,000 test characters, is employed. The proposed technique uses 1,000 characters to synthesize 60,000 characters, which are then used to train and test a benchmark handwriting recognition system that exploits Histogram of Gradients (HOG) as features and a Support Vector Machine (SVM) as the recognizer. The experimental result yields 8.06% errors, significantly outperforming tangent-based affine transformation and the original MNIST training data, which yield 11.74% and 16.55%, respectively.
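A sketch of the benchmark recognizer (HOG features plus an SVM) on MNIST-sized 28 x 28 digits; the HOG cell and block sizes are typical choices, not necessarily the paper's.

```python
# Illustrative sketch: HOG features + SVM on 28x28 handwritten digits.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 28, 28) grayscale digits -> (N, D) HOG descriptors."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(7, 7),
                         cells_per_block=(2, 2)) for img in images])

# clf = SVC(kernel="rbf").fit(hog_features(train_imgs), train_labels)
# error_rate = 1.0 - clf.score(hog_features(test_imgs), test_labels)
```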
OSPCV: Off-line Signature Verification using Principal Component Variances (IOSR Journals)
This document presents an offline signature verification system called OSPCV (Offline Signature Verification using Principal Component Variances) that analyzes two features - pixel density and center of gravity distance. It describes the related work in signature verification, the proposed OSPCV algorithm, and experimental results showing it provides a notable improvement over existing systems. The OSPCV system overcomes intra-signature and inter-signature variations to produce a better equal error rate for differentiating genuine versus forged signatures.
This document discusses a proposed method for offline signature verification called OSPCV (Off-line Signature Verification using Principal Component Variances). The method extracts two features from signatures - pixel density and center of gravity distance. It then uses Principal Component Analysis to analyze the features and train a model using signature samples. When a new "test" signature is analyzed, it extracts the same two features and compares them to the trained model to determine if the signature is genuine or a forgery. The researchers believe this method provides better accuracy than existing offline signature verification systems, especially in differentiating between genuine and skilled forgery signatures. It aims to overcome challenges from intra-personal and inter-personal signature variations.
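A sketch of one plausible reading of the two features (per-column pixel density and centre-of-gravity position) followed by PCA; the per-column formulation and the use of scikit-learn are assumptions, since the summary does not pin down the exact definitions.

```python
# Illustrative sketch: per-column pixel density and centre-of-gravity
# features followed by PCA.
import numpy as np
from sklearn.decomposition import PCA

def column_features(binary: np.ndarray) -> np.ndarray:
    """binary: boolean image. Returns a (width, 2) matrix of per-column
    ink density and normalised centre-of-gravity row."""
    h, _ = binary.shape
    col_ink = binary.sum(axis=0)
    density = col_ink / h
    rows = np.arange(h)[:, None]
    cog = np.where(col_ink > 0,
                   (rows * binary).sum(axis=0) / np.maximum(col_ink, 1),
                   h / 2.0)
    return np.stack([density, cog / h], axis=1)

# pca = PCA(n_components=2).fit(np.vstack([column_features(s) for s in genuine]))
# pca.explained_variance_ then gives the principal component variances
# compared between genuine and questioned signatures.
```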
Offline handwritten signature identification using adaptive window positionin... (sipij)
To address this challenge, the paper proposes an Adaptive Window Positioning technique which focuses not just on the meaning of the handwritten signature but also on the individuality of the writer. The technique divides the handwritten signature into 13 small windows of size n x n (13 x 13); this size should be large enough to contain ample information about the style of the author and small enough to ensure good identification performance. The process was tested on a GPDS dataset containing 4,870 signature samples from 90 different writers, comparing the robust features of the test signature with those of the user's reference signature using an appropriate classifier. Experimental results reveal that the adaptive window positioning technique is an efficient and reliable method for accurate signature feature extraction for the identification of offline handwritten signatures. The technique can also be used to detect signatures signed under emotional duress.
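A sketch of one way to place a fixed number of fixed-size windows on ink-dense regions of a signature; the greedy densest-region heuristic is an assumption, while the counts (13 windows of 13 x 13 pixels) follow the abstract.

```python
# Illustrative sketch: greedily place 13 windows of 13x13 pixels on the
# densest ink regions of a binary signature image.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def ink_centred_windows(binary: np.ndarray, n_windows: int = 13, size: int = 13):
    work = binary.astype(float).copy()
    windows = []
    for _ in range(n_windows):
        density = sliding_window_view(work, (size, size)).sum(axis=(2, 3))
        y, x = np.unravel_index(np.argmax(density), density.shape)
        windows.append(binary[y:y + size, x:x + size].copy())
        work[y:y + size, x:x + size] = 0.0   # suppress the chosen region
    return windows
```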
An appraisal of offline signature verification techniques (Salam Shah)
Biometrics is commonly used nowadays for the identification and verification of humans everywhere in the world, based on unique human characteristics such as the palm, fingerprints and iris. Pattern recognition and image processing are the major areas where research on signature verification is carried out. The handwritten signature of an individual is also unique, and is used and accepted for human identification, especially in banking and other financial transactions. Because of their importance, handwritten signatures are a target of fraud. In this paper we survey papers on techniques that are currently used for the identification and verification of offline signatures.
Handwriting recognition can be divided into two parts: offline handwriting recognition and online handwriting recognition. Online handwriting recognition systems can give highly accurate output within predefined constraints related to vocabulary size, writer dependency, printed writing style, and so on. Hidden Markov models increase the success rate of online recognition systems. Online handwriting recognition also provides timing information which is not present in offline systems. A Markov process is a random process whose future behavior relies only on its present state and does not depend on past states; that is, it satisfies the Markov condition. A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with hidden states; HMMs can be viewed as extensions of discrete-state Markov processes. Online handwriting recognition technology can drastically improve human-machine interaction, since writing by hand with a digital pen or similar equipment is more natural than using a keyboard. HMMs build effective mathematical models for characterizing variance in both time and signal space, as originally demonstrated for speech signals.
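Since the HMM is the workhorse mentioned above, here is a minimal numpy sketch of the forward algorithm, which computes the probability of an observation sequence under a discrete-emission HMM; the matrices are toy values, not a handwriting model.

```python
# Toy sketch: the HMM forward algorithm in numpy.
import numpy as np

def forward(pi, A, B, obs):
    """P(obs | model) for a discrete-emission HMM.
    pi: (S,) initial state probabilities
    A:  (S, S) transition matrix, A[i, j] = P(j | i)
    B:  (S, O) emission matrix,   B[i, o] = P(o | i)"""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return alpha.sum()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward(pi, A, B, obs=[0, 1, 1]))   # likelihood of the toy sequence
```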
This document summarizes a research paper that proposes using a Real-Coded Genetic Algorithm to design Unified Power Flow Controller (UPFC) damping controllers. The goal is to damp low frequency oscillations in power systems. The paper models a single-machine infinite-bus power system installed with a UPFC. It linearizes the system equations and formulates the controller design as an optimization problem to minimize oscillations. Simulation results comparing the proposed RCGA approach to conventional tuning are presented to demonstrate its effectiveness and robustness in damping power system oscillations.
The main-principles-of-text-to-speech-synthesis-system (Cemal Ardil)
This document discusses text-to-speech synthesis systems. It provides background on the history and development of such systems over three generations from 1962 to the present. It describes some of the main challenges in developing speech synthesis for different languages. The document then focuses on specifics of the Azerbaijani language and outlines the approach used in the text-to-speech synthesis system developed by the authors, which combines concatenative synthesis and formant synthesis methods.
The feedback-control-for-distributed-systems (Cemal Ardil)
The document summarizes a study on feedback control synthesis for distributed systems. The study proposes a zone control approach, where the state space is partitioned into zones defined by observable points. Control actions are piecewise constant functions that only change when the system transitions between zones. An optimization problem is formulated to determine the optimal constant control value for each zone. Gradient formulas are derived to solve this using numerical optimization methods. The zone control approach was tested on heat exchanger process control problems and showed more robust performance than alternative methods.
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa... (Cemal Ardil)
This document summarizes a research paper that analyzes the transient behavior of the overflow probability in a queueing system with fixed-size batch arrivals (batch size B, arrival rate λ and service rate μ). It introduces a family of polynomials, defined piecewise over the index ranges 0 ≤ k ≤ B and k ≥ B + 1, that generalizes the Chebyshev polynomials of the second kind and can be used to assess the transient behavior; in the special case B = 1, after the substitution x → 2x, their generating function reduces to that of the Chebyshev polynomials of the second kind.
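For reference, the generating-function identity invoked above, assuming the standard convention for the Chebyshev polynomials of the second kind U_k, is:

```latex
\sum_{k=0}^{\infty} U_k(x)\, t^k \;=\; \frac{1}{1 - 2xt + t^2}
```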
Sonic localization-cues-for-classrooms-a-structural-model-proposal (Cemal Ardil)
The document describes a proposed structural model for sonic localization cues in classrooms. It discusses two primary cues for localization, the interaural time difference (ITD) and the interaural level difference (ILD), created by sounds reaching each ear. While these cues provide azimuth information, they do not provide elevation information. Elevation information is provided by the spectral filtering effects of the head, torso and outer ears (pinnae), known as the head-related transfer function (HRTF). The proposed structural model aims to produce well-controlled horizontal and vertical localization cues through a signal processing model of the HRTF that mimics how sounds interact with the body. The effectiveness of the model is tested through synthesized spatial audio experiments with human subjects.
This document summarizes a new method for designing robust fuzzy observers for nonlinear systems based on Takagi-Sugeno fuzzy models. The method uses linear matrix inequalities to design observers that minimize the H-norm of the closed loop system, providing a measure of robustness and disturbance attenuation. The observer design method is similar to existing parallel distributed compensation controller design methods, making it possible to adapt controller design techniques for observer design. The observer estimates system states and outputs based on measured outputs and system inputs while attenuating effects of disturbances and uncertainties.
This document discusses evaluating the response quality of heterogeneous question answering systems. It begins by noting the lack of standard evaluation metrics for systems that use natural language understanding and reasoning to answer questions, as opposed to just information retrieval. It proposes a "black-box" approach to evaluate response quality by observing system responses, developing a classification scheme to categorize responses, and assigning scores. As a demonstration, it applies this approach to evaluate three example systems (AnswerBus, START, and NaLURI) on a set of questions about cyberlaw.
The document describes two methods for reducing the order of linear time-invariant systems: Routh approximation and particle swarm optimization (PSO). Routh approximation determines the denominator of the reduced order model using a Routh array, while retaining time moments or Markov parameters to determine the numerator. PSO reduces order by minimizing the integral squared error between responses of the original and reduced models, adjusting numerator and denominator coefficients. The methods are illustrated on examples, with Routh approximation providing stability guarantees when applied to stable systems.
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-design (Cemal Ardil)
This document summarizes a research paper that uses a real-coded genetic algorithm to optimize the design of power system stabilizers. The algorithm is applied to both single-machine and multi-machine power systems. The goal is to minimize rotor speed deviations and improve stability under disturbances. Simulation results show the proposed controller provides effective damping of low frequency oscillations across different operating conditions.
The document presents a method for obtaining the exact probability of error for block codes using soft-decision decoding and the eigenstructure of the code correlation matrix. It shows that under a unitary transformation, the performance evaluation of a block code becomes a one-dimensional problem involving only the dominant eigenvalue and its corresponding eigenvector. Simulation results demonstrate good agreement with the analysis, validating the method for computing the bit error rate of block codes based on the properties of the code correlation matrix.
This document presents an optimal supplementary damping controller design for Thyristor Controlled Series Compensator (TCSC) using Real-Coded Genetic Algorithm (RCGA). TCSC is capable of improving power system stability by modulating reactance during disturbances. The document proposes using a multi-objective fitness function consisting of damping factors and real parts of eigenvalues to optimize the parameters of a TCSC-based supplementary damping controller using RCGA. Simulation results presented show the effectiveness of the proposed controller over a wide range of operating conditions and disturbances.
This document presents a method for generating optimal straight line trajectories in 3D space using an algorithm called the Bounded Deviation Algorithm (BDA). BDA approximates a straight line trajectory between two points by iteratively inserting knot points to minimize the deviation between the actual trajectory and the joint space trajectory. The document provides the mathematical formulation and simulation results of applying BDA to generate a straight line trajectory for a 5-axis articulated robot between two specified points.
On the-optimal-number-of-smart-dust-particles (Cemal Ardil)
This document discusses optimizing the number of smart dust particles used to generate weather maps. It addresses two main challenges: 1) how to match signals from smart dust particles to receivers given atmospheric constraints, and 2) what is the optimal number of particles needed to generate precise and cost-effective 3D maps. The document presents an algorithm to optimally match particles to receivers in O(n*m) time by framing it as a maximal bipartite graph matching problem. It also develops mathematics to prove a conjecture that the optimal number of particles is approximately 1/ε, where ε is the drift error.
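The summary frames the particle-to-receiver assignment as maximal bipartite matching. Below is a standard augmenting-path (Kuhn's algorithm) sketch of that framing; it is not the paper's own O(n*m) construction, and the adjacency structure is an assumed input.

```python
# Standard augmenting-path (Kuhn's) maximum bipartite matching sketch.
def max_bipartite_matching(adj, n_particles, n_receivers):
    """adj[p]: receivers that can hear particle p. Returns (size, match_r)."""
    match_r = [-1] * n_receivers            # receiver -> matched particle

    def try_assign(p, seen):
        for r in adj[p]:
            if not seen[r]:
                seen[r] = True
                # take r if free, or if its current particle can be re-routed
                if match_r[r] == -1 or try_assign(match_r[r], seen):
                    match_r[r] = p
                    return True
        return False

    size = sum(try_assign(p, [False] * n_receivers) for p in range(n_particles))
    return size, match_r
```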
On the Joint Optimization of Performance and Power Consumption in Data Centers - Cemal Ardil
The document summarizes research on jointly optimizing performance and power consumption in data centers. It models the process of mapping tasks in a data center onto machines as a multi-objective problem to minimize both energy consumption and response time (makespan), subject to deadline and architectural constraints. It proposes using a simple goal programming technique that guarantees Pareto optimal solutions with good convergence. Simulation results show the technique achieves superior performance compared to other approaches and is competitive with optimal solutions for small-scale problems.
On the Approximate Solution of a Nonlinear Singular Integral Equation - Cemal Ardil
This document summarizes a study on finding approximate solutions to nonlinear singular integral equations. The study proves the existence and uniqueness of solutions to such equations defined on bounded regions of the complex plane. It then presents a method for finding approximate solutions using an iterative fixed-point principle approach. Nonlinear singular integral equations have many applications in fields like elasticity, fluid mechanics, and mathematical physics. The study contributes to improving methods for solving these important types of equations.
On Problem of Parameters Identification of Dynamic Object - Cemal Ardil
This document discusses methods for identifying parameters of dynamic objects described by systems of ordinary differential equations. Specifically, it addresses problems with multiple initial boundary conditions that are not shared across points.
The paper proposes a new "conditions shift" method to transfer the initial boundary conditions in a way that eliminates differential links and multipoint conditions. This reduces the parameter identification problem to solving either an algebraic system or a quadratic programming problem.
Two cases are considered: case A, where the number of conditions equals the number of conditionally free parameters, resulting in a single parameter-vector solution; and case B, where additional conditions on the parameters are needed in the form of equalities or inequalities, resulting in an optimization problem to select optimal parameter values.
This document summarizes a research article about numerical modeling of gas turbine engines. The researchers developed mathematical models and numerical methods to calculate the stationary and quasi-stationary temperature fields of gas turbine blades with convective cooling. They combined the boundary integral equation method and finite difference method to solve this problem. The researchers proved the validity of these methods through theorems and estimates. They were able to visualize the temperature profiles using methods like least squares fitting with automatic interpolation, spline smoothing, and neural networks. The reliability of the numerical methods was confirmed through calculations and experimental tests of heat transfer characteristics on gas turbine nozzle blades.
New Technologies for Modeling of Gas Turbine Cooled Blades - Cemal Ardil
The document describes new technologies for modeling gas turbine cooled blades, including:
1) Developing mathematical models and numerical methods using the boundary integral equation method (BIEM) and finite difference method (FDM) to calculate the stationary and quasi-stationary temperature field of a blade profile with convective cooling.
2) Using splines, smooth interpolation, and neural networks for visualization of blade profiles.
3) Validating the designed methods through computational and experimental investigations of heat and hydraulic characteristics of a gas turbine nozzle blade.
This document discusses using neuro-fuzzy networks to identify parameters for mathematical models of geofields. It proposes a new technique using fuzzy neural networks that can be applied even when data is limited and uncertain in the early stages of modeling. A numerical example is provided to demonstrate the identification of parameters for a regression equation model of a geofield using a fuzzy neural network structure. The network is trained on experimental fuzzy statistical data to determine values for the regression coefficients that satisfy the data. The technique is concluded to have advantages over traditional statistical methods as it can be applied regardless of the parameter distribution and is well-suited for cases with insufficient data in early modeling stages.
This document presents a new multivariate fuzzy time series forecasting method to predict car road accidents. The method uses secondary factors (numbers killed, mortally wounded, died within 30 days of the accident, severely wounded, and lightly injured) along with the main factor of total annual car accidents in Belgium from 1974 to 2004. The new method establishes fuzzy logical relationships between the factors to generate forecasts. Experimental results show the proposed method performs better than existing fuzzy time series forecasting approaches at predicting car accidents. Actuaries can use this kind of multivariate fuzzy time series analysis to help define insurance premiums and underwriting.
World Academy of Science, Engineering and Technology
International Journal of Computer, Information Science and Engineering Vol:2 No:7, 2008

Automatic Authentication of Handwritten Documents via Low Density Pixel Measurements

Abhijit Mitra, Pranab Kumar Banerjee and Cemal Ardil

Manuscript received July 12, 2005. A. Mitra is with the Department of Electronics and Communication Engineering, Indian Institute of Technology (IIT) Guwahati, North Guwahati - 781039, India (e-mail: a.mitra@iitg.ernet.in). P. K. Banerjee is with the Department of Electronics and Tele-Communication Engineering, Jadavpur University, Kolkata - 700032, India (e-mail: pkbanj@rediffmail.com). C. Ardil is with the Azerbaijan National Academy of Aviation, Baku, Azerbaijan (e-mail: cemalardil@gmail.com).

Abstract— We introduce an effective approach for automatic offline authentication of handwritten samples where the forgeries are skillfully done, i.e., the true and forged sample appearances are almost alike. Subtle details of the temporal information used in online verification are not available offline and are also hard to recover robustly. Thus the spatial dynamic information, like the pen-tip pressure characteristics, is considered, with emphasis on the extraction of low density pixels. These points result from the ballistic rhythm of a genuine signature, which a forgery, however skillful it may be, always lacks. Ten effective features, including these low density points and the density ratio, are proposed to make the distinction between a true and a forged sample. An adaptive decision criterion is also derived for better verification judgements.

Keywords— Handwritten document verification, Skilled forgeries, Low density pixels, Adaptive decision boundary.

I. INTRODUCTION

Handwritten document authentication, in general, attempts to confirm that a given written sample of a person is genuine, or equivalently, to identify a questioned sample as a forgery. Offline handwriting authentication [1]-[7] is considered more difficult than online verification [8] due to the lack of dynamic information like the order of strokes, speed of signing, acceleration etc. This makes it difficult to define effective global and local features for verification purposes, and thus the field is not as developed as online detection. Nevertheless, the common factors available in offline authentication, such as the slant of writing, the relative height of small and tall letters etc., have been used successfully in most of the reported works on free-hand forgery detection [7]. However, skilled forgeries, often subclassified into traced and simulated forgeries, involve attempting to mimic the style of the writer and thus can be difficult to detect with only these static features. The main problem lies in designing a feature extraction method which gives stable features for the genuine written samples despite their inevitable variations, and salient features for the forgeries even if the imitations are skillfully done. Even though a genuine writer can never produce exactly the same handwriting twice, e.g., signatures, and many factors can affect signatures and other handwriting, including injuries, illness, temperature, age and emotional state as well as external factors such as alcohol and drugs [3], an original piece of writing has many natural personal characteristics like cursiveness, ballistic rhythm etc. A forged sample, however skillful it may be, always lacks this rhythm and has poor line quality. Hence the features that provide an efficient basis for offline authentication are the non-natural characteristics like hesitation, patching, retouching etc., which are generally concerned with the gray levels of the signature (and are lost if we deal with the signature as a binary image). In the 1980s, Ammar et al. [6], aiming at skilled forgery detection, used such feature sets. They considered the geometric information about the letters far less informative, and the main emphasis was given to high-pressure regions, which were actually the dark pixels in the signature image corresponding to high-pressure points on the writing trace. There, the focus was on the ratio of the number of dark pixels to the total number of pixels in the signature image, and it was one of the first attempts to consider dynamic information in the static image for verification. Subsequently, many other ideas were developed for skilled forgery verification, like the orientations of the gradient vectors at each pixel of the signature [9], a fuzzy technique to add some pseudo-dynamic information such as pen-up and pen-down events [10], curve comparison algorithms [11], topological features like branch points and crossing points [12] etc. However, to the best of our knowledge, none of these works has given emphasis to low pen-tip pressure points, or, in other words, low density pixel measurement, which can well be another basis for distinguishing between genuine and skillful forgery samples.

In this paper, we propose an effective approach to automatic handwriting authentication via low density pixel measurements with an adaptive threshold as the decision boundary. For simplicity, we deal only with handwritten signatures in the proposed scheme, which can well be extended to any piece of handwritten sample without loss of generality. Towards this, the low and high density pixel percentages are computed and ten effective features are proposed, including the ratio of the above mentioned density percentages. An adaptive decision criterion is then introduced, leading towards better system reliability. The simulation studies also exhibit better results than most of the existing schemes.

The paper is organized as follows. Sections II and III briefly discuss the types of forgeries and the signature data bank respectively. The measurement of pixel densities is dealt with in detail in Section IV. The authentication process is given in Section V, where, in particular, the ten different features are given together with the deviation measurement from them, followed by the proposed adaptive decision boundary. Section VI discusses the experimental results and the paper is concluded in Section VII.
II. CLASSIFICATION OF FORGED DOCUMENTS

Forged documents are generally classified into two types, namely free-hand and skilled. These are discussed below.

Free-hand forgeries [7] are further subclassified into random and simple forgeries. When a forger simply uses an invalid signature without any prior knowledge of the authenticated person's name or style of writing, it is classified as a random forgery. These are the simplest to detect because their characteristics differ globally from the genuine signature. Simple forgeries involve using the writer's name without any a-priori information about the genuine signature style. A majority of forgeries are free-hand forgeries, but they are often overlooked when there are massive numbers of documents to be processed.

Skilled forgeries [5], on the other hand, with the further subclassification into traced and simulated forgeries, are almost identical to the true samples. Simulated forgeries are those in which the forger imitates the original signature style from his/her memory, while a traced forgery is a fraudulent signature which has been executed by actually following the outline of a genuine signature with a writing instrument. Our work is aimed at detecting these traced and simulated signatures.
III. SIGNATURE DATA BANK

The signature data bank consists of 200 genuine and 200 forgery samples. True samples were written by 20 different persons as their own signatures, with 10 samples each. Forgeries were written by 4 different forgers simulating and tracing original samples of those 20 persons. All samples were written using the same ball pen in a horizontally oriented limited space in order to elude the disparity of gray levels among several genuine signatures of the same person. The signature image is dealt with as a noise-free gray image where a dark pixel means a high-pressure pixel. All the samples are scanned within a limited space of 256 × 512 pixels and are digitized to a matrix A = [a_ij], with each a_ij taking one of 2^8 gray levels. In the sequel, we denote the maximum and minimum density pixels of any signature by a_max and a_min respectively.

IV. MEASUREMENT OF PIXEL DENSITY FEATURES

In this section, we develop the extraction method of low and high density pen-tip points and the respective density ratios. In order to distinguish among different density regions, two suitable threshold points are defined, one for high density pixels and the other for low density pixels, which are discussed below.

A. Threshold Selection

From the original gray level density information of the signature (i.e., the normalized histogram), we select a lower threshold point (T_L) and an upper threshold point (T_U) adaptively at the gray levels which correspond to the lower and upper 1/√2 points respectively of the peak frequency of the same normalized histogram of the signature in question. An example of this selection procedure is given in Fig. 1, where Max represents the maximum number of pixels at a particular gray level in any given signature.

Fig. 1 An example of selecting T_L and T_U.
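As a concrete illustration of this selection rule, the following minimal sketch (our own, not from the paper; the function name and the use of an 8-bit NumPy gray-level image are assumptions) walks outward from the histogram peak until the frequency first drops to 1/√2 of the peak value:

```python
import numpy as np

def select_thresholds(gray_img):
    """Pick T_L and T_U adaptively from the normalized gray-level
    histogram, at the gray levels whose frequencies fall to 1/sqrt(2)
    of the peak frequency on either side of the peak (cf. Fig. 1)."""
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    hist = hist / hist.sum()              # normalized histogram
    peak = int(np.argmax(hist))           # gray level of the peak (Max)
    cutoff = hist[peak] / np.sqrt(2.0)    # 1/sqrt(2) of peak frequency

    # Walk from the peak toward lower gray levels for T_L ...
    t_low = peak
    while t_low > 0 and hist[t_low] > cutoff:
        t_low -= 1
    # ... and toward higher gray levels for T_U.
    t_up = peak
    while t_up < 255 and hist[t_up] > cutoff:
        t_up += 1
    return t_low, t_up
```

Walking outward from the peak mirrors the graphical construction of Fig. 1; an implementation could instead interpolate between histogram bins for finer threshold placement.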
B. Low Density Pixel Percentage (LDPP)

Low density pixels (LDP) are those signature pixels which have gray level values lower than T_L. LDPs can be extracted by the relation

    a_{ij}^{LDP} = \begin{cases} 1, & \forall\, a_{ij} \le T_L \\ 0, & \text{otherwise} \end{cases}    (1)

where a_{ij}^{LDP} represent the LD pixels. We can now define a low density pixel percentage (LDPP) in a signature as the ratio of LDPs to the binarized signature area as

    LDPP = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} a_{ij}^{LDP}}{\sum_{i=1}^{M} \sum_{j=1}^{N} a_{ij}^{bin}} \times 100    (2)

with a_{ij}^{bin} being the binarized image [13] pixels.
C. High Density Pixel Percentage (HDPP)

High density pixels (HDP) are signature points which have gray level values higher than T_U. HDPs can be extracted [6] by the following relation:

    a_{ij}^{HDP} = \begin{cases} 1, & \forall\, a_{ij} \ge T_U \\ 0, & \text{otherwise} \end{cases}    (3)
where a_{ij}^{HDP} represent the HD pixels. Here we define a high density pixel percentage (HDPP) as the ratio of HDPs to the binarized signature area and express it as

    HDPP = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} a_{ij}^{HDP}}{\sum_{i=1}^{M} \sum_{j=1}^{N} a_{ij}^{bin}} \times 100    (4)

where, in both cases, M × N denotes the total signature area. Applying the aforesaid adaptive threshold selection procedure to the density ratios, we obtained 22 ≤ HDPP ≤ 41, but in the case of LDPP the range was specified by 10 ≤ LDPP ≤ 42, where most of the forgery samples showed very poor values of LDPP. It was also observed in several signatures that almost the same values of HDPP were obtained in both cases, although the positions of the HDPs for an original and a forgery sample differed significantly. However, a forger could never match the value of LDPP obtained from a true sample. The observation is made clear in Fig. 2 and Fig. 3: Fig. 2 shows the almost similar binarized images of a true and a forgery sample, while Fig. 3 exhibits a significant mismatch between the low density points of the same samples.

Fig. 2 Comparison of binarized images of a true and a forgery signature.

Fig. 3 Comparison of low density pixels of the same true and forgery samples.
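Eqs. (1)-(4) reduce to a few array operations. The sketch below is a minimal illustration under our own assumptions (NumPy arrays; a_bin is a 0/1 binarized signature image obtained separately, e.g., by the method of [13]; the function name is ours):

```python
import numpy as np

def density_percentages(gray_img, a_bin, t_low, t_up):
    """Compute LDPP and HDPP as in eqs. (1)-(4): percentages of
    low/high density pixels relative to the binarized signature area."""
    ldp = (gray_img <= t_low).astype(int)   # eq. (1): a_ij^LDP
    hdp = (gray_img >= t_up).astype(int)    # eq. (3): a_ij^HDP
    area = a_bin.sum()                      # binarized signature area
    ldpp = 100.0 * ldp.sum() / area         # eq. (2)
    hdpp = 100.0 * hdp.sum() / area         # eq. (4)
    return ldpp, hdpp
```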
V. VERIFICATION PROCEDURE

Finding a suitable feature set to establish a reasonable basis of distinction between original and forgery samples is a difficult task. We, however, attempt the same with the following set, chosen from several observations with different kinds of signatures.

A. Feature Set

Ten features (φ_1 to φ_10) have been used for verification purposes. The selection of these features is based on experimental observation; they are given below.

φ_1, φ_2, φ_3: Vertical position of the peak frequency in the vertical projection of the binarized image, HDP and LDP, respectively.
φ_4, φ_5, φ_6: Horizontal position of the peak frequency in the horizontal projection of the binarized image, HDP and LDP, respectively.
φ_7: High to low density ratio, i.e., HDPP/LDPP.
φ_8: Lower threshold point computed from the pressure histogram (T_L).
φ_9: Dynamic range of the signature pixel values, i.e., a_max − a_min.
φ_10: Aspect ratio of the signature, i.e., length/width ratio of just the signature area.
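How such a feature vector might be assembled is sketched below. This is our own illustration; the exact projection and peak-position conventions of the paper, as well as the helper names, are assumptions:

```python
import numpy as np

def feature_vector(gray_img, a_bin, t_low, t_up):
    """Assemble the ten features phi_1..phi_10 described above."""
    ldp = (gray_img <= t_low).astype(int)
    hdp = (gray_img >= t_up).astype(int)

    def peak_positions(mask):
        # Row (vertical) and column (horizontal) of the projection peaks.
        return (int(np.argmax(mask.sum(axis=1))),
                int(np.argmax(mask.sum(axis=0))))

    (p1, p4), (p2, p5), (p3, p6) = (peak_positions(m)
                                    for m in (a_bin, hdp, ldp))

    area = a_bin.sum()
    ldpp = 100.0 * ldp.sum() / area
    hdpp = 100.0 * hdp.sum() / area

    rows, cols = np.nonzero(a_bin)            # bounding box of the signature
    length = cols.max() - cols.min() + 1
    width = rows.max() - rows.min() + 1

    sig_vals = gray_img[a_bin.astype(bool)]   # pixel values on the signature
    return np.array([p1, p2, p3, p4, p5, p6,
                     hdpp / ldpp,                      # phi_7: HDPP/LDPP
                     t_low,                            # phi_8: T_L
                     sig_vals.max() - sig_vals.min(),  # phi_9: dynamic range
                     length / width],                  # phi_10: aspect ratio
                    dtype=float)
```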
B. Deviation Measurement

The total deviation of an unknown sample is measured by

    D_{RMS} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \frac{(\phi_i - \mu_i)^2}{\mu_{NF_i}} }    (5)

where N is the number of features used (here N = 10) and μ_{NF_i} is a normalization factor for the ith feature, computed by

    \mu_{NF_i} = \sqrt{ \frac{1}{T} \sum_{j=1}^{T} (\phi_i(j) - \mu_i)^2 }    (6)

with T being the total number of known original samples for a particular person and μ_i the mean of φ_i over the known true samples, i.e.,

    \mu_i = \frac{1}{T} \sum_{j=1}^{T} \phi_i(j).    (7)

The deviation parameter D_RMS is the key factor behind the verification criteria. If we now define D_Tmax as the maximum deviation in the true samples of a given signature, D_Fmin as the minimum deviation in the forgery samples of that person and D_sep as the minimum distance [6] separating original and forgery samples of the same person, then we can write

    D_{sep} = D_{F_{min}} - D_{T_{max}}.    (8)

In this case, the efficiency of the chosen feature set stems from the fact that it has ensured D_Fmin > D_Tmax, i.e., D_sep > 0, for most of the cases.
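Eqs. (5)-(7) translate directly into code. The following sketch (ours, under the same NumPy assumptions as above) scores an unknown feature vector against the T reference vectors of a claimed writer:

```python
import numpy as np

def d_rms(reference_feats, unknown_feat):
    """Deviation of an unknown sample from a writer's reference set.

    reference_feats: (T, N) array of feature vectors from T known
                     genuine samples; unknown_feat: (N,) vector.
    """
    mu = reference_feats.mean(axis=0)                         # eq. (7)
    nf = np.sqrt(((reference_feats - mu) ** 2).mean(axis=0))  # eq. (6)
    # eq. (5): RMS of the normalized per-feature deviations
    return np.sqrt(np.mean((unknown_feat - mu) ** 2 / nf))
```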
C. Authentication Criteria

Having all the above parameters computed, we introduce a threshold value (V_th), closely related to the aforesaid deviations, for the practical purpose of verification, and the decision is taken as follows.
• If D_RMS < V_th, the input sample is considered to be a true sample.
• If D_RMS ≥ V_th, the input sample is judged to be a forgery sample.
The threshold parameter V_th, therefore, must be a pre-established one. After investigating a few aspects, an adaptive decision criterion has been followed to select V_th.

C.1 Adaptive decision criteria for selecting V_th:

In [6], a simple threshold was proposed by assigning V_th the value D_Tmax, i.e., V_thresh = D_Tmax. This could not converge to good results for such a simplified model. Another modified threshold has also been taken into account to give an extra safety factor by setting V_thresh = D_Tmax + D_sep/√2. This model, however, could not ensure a reasonable system reliability, as D_sep does not come out to be positive all the time. We thus use a flexible threshold model by employing an observational formula to calculate V_th, given as

    V_{th} = D_{T_{max}} \left( 1 + \frac{c}{\sigma^2} \right) - \mu_D    (9)

where μ_D is the mean of the deviation D_RMS computed on the set of known genuine signatures of the person in question, σ² is the variance of the same set, and the variable coefficient c, which can vary within (0, σ²], is set to the particular value (c_opt) that corresponds to the best system reliability (SR). The optimum value c_opt is found with the following adaptive formula. A parameter p, updated at every nth iteration, is initially set to p(0) = 0 and then adapted as

    p(n+1) = p(n) + \lambda \, \mathrm{sgn}\{ SR(n) - SR(n-1) \}    (10)

where SR(n) and SR(n−1) denote the system reliability values at the nth and (n−1)th iterations respectively, sgn(·) indicates the conventional sign function and λ is the step size according to which p(n) is either incremented or decremented. This λ can be set with respect to the rate of change of c. With this equation, c_opt is assigned the value

    c_{opt} = \max\{ p(n) \mid \forall n \}    (11)

and is passed on to eq. (9) to set c for obtaining the final V_th value. The entire decision criterion is summarized in Table I.

TABLE I
OPERATIONAL FLOW OF PROPOSED ADAPTIVE DECISION CRITERIA

1. Initialization: p(0) = 0, λ = 0.01 (in this case), c = 0. Compute SR(0).
2. Loop operation: for n = 1 to σ²/λ, do:
   (a) c ← c + λ.
   (b) Compute V_th.
   (c) Compute SR(n).
   (d) p(n + 1) = p(n) + λ sgn{SR(n) − SR(n − 1)}.
   (e) Store p(n).
3. Postprocessing: c_opt = max{p(n) | ∀n}; c ← c_opt; find V_th and SR with this value.

Fig. 4 Relation among PCA, PCR and SR with different values of c for a particular set of genuine and forgery signatures.
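The operational flow of Table I can be prototyped as follows. This is a sketch under our own assumptions: deviation scores for a labeled tuning set of genuine and forged samples are precomputed with a function like d_rms above, and SR is the PCA/PCR average defined in Section VI:

```python
import numpy as np

def system_reliability(v_th, true_devs, forgery_devs):
    """SR = average of correct-acceptance and correct-rejection rates."""
    pca = np.mean(true_devs < v_th)       # accepted originals
    pcr = np.mean(forgery_devs >= v_th)   # rejected forgeries
    return 100.0 * (pca + pcr) / 2.0

def adapt_threshold(true_devs, forgery_devs, lam=0.01):
    """Adaptive selection of c (Table I) for the flexible threshold (9)."""
    d_t_max = true_devs.max()             # D_Tmax
    mu_d = true_devs.mean()               # mu_D
    var = true_devs.var()                 # sigma^2

    v_th = lambda c: d_t_max * (1.0 + c / var) - mu_d   # eq. (9)

    c, p, history = 0.0, 0.0, []
    sr_prev = system_reliability(v_th(c), true_devs, forgery_devs)
    for _ in range(int(var / lam)):       # n = 1 .. sigma^2 / lambda
        c += lam
        sr = system_reliability(v_th(c), true_devs, forgery_devs)
        p += lam * np.sign(sr - sr_prev)  # eq. (10)
        history.append(p)                 # store p(n)
        sr_prev = sr
    c_opt = max(history)                  # eq. (11)
    return v_th(c_opt), c_opt
```

In deployment only the returned V_th is needed; the labeled forgeries serve purely to tune c, mirroring how Fig. 4 sweeps c against SR.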
VI. EXPERIMENTAL RESULTS

The verification results on the real-life set are collectively judged by the percentage of correct acceptance (PCA), the percentage of correct rejection (PCR) and the system reliability (SR). PCA is defined as the ratio of the number of accepted original samples to the number of known original samples. PCR is the ratio of the number of rejected forgery samples to the number of known forgery samples, while SR is defined as the average of these two (all three ratios should be multiplied by 100 to obtain percentage values). The proposed flexible threshold, along with the simple and modified ones, has been applied to the entire signature data bank and the results have been calculated on an average basis. For the simple threshold, we obtained PCA = 92, PCR = 95, SR = 93.5. For the modified threshold, the values were PCA = 94.5, PCR = 95, SR = 94.75, and for the proposed flexible threshold with adaptive decision, PCA = 97.5, PCR = 96, SR = 96.75. Fig. 4 shows that selecting V_th in the flexible threshold case is not very critical, as SR > 95 can easily be obtained over a considerable range of c.
VII. CONCLUSIONS

In this paper, a method for offline handwritten document authentication has been proposed for cases where the true and forgery samples are almost alike. Along with the high density pen-tip points, we have introduced the notion of low density points, which have shown their effectiveness for such a problem by indicating the most important distinction with respect to simulated or traced signatures, i.e., the non-natural signature characteristics. The features have been chosen very carefully so that the local characteristics of a forgery sample, like lack of ballistic rhythm and poor line quality, can easily be detected in our scheme. Experimental results have also supported the effectiveness of pressure regions, especially the low density pixels. The LDPs are expected to find adequate utility in forensic applications in future studies.

REFERENCES

[1] B. Fang et al., "Offline Signature Verification by the Analysis of Cursive Strokes," Int. J. Pattern Recognition, Artificial Intelligence, vol. 15, no. 4, pp. 659-673, 2001.
[2] A. Mitra, "An Offline Verification Scheme of Skilled Handwritten Forgery Documents using Pressure Characteristics," IETE Journal of Research, vol. 50, no. 2, pp. 141-145, April 2004.
[3] J. K. Guo, D. Doermann and A. Rosenfeld, "Local Correspondence for Detecting Random Forgeries," in Proc. 4th IAPR Conf. Document Analysis and Recognition, Ulm, Germany, 1997, pp. 319-323.
[4] F. Leclerc and R. Plamondon, "Automatic Signature Verification: the State of the Art – 1989-1993," Int. J. Pattern Recognition, Artificial Intelligence, vol. 8, pp. 3-20, 1994.
[5] M. Ammar, "Progress in Verification of Skillfully Simulated Handwritten Signatures," Int. J. Pattern Recognition, Artificial Intelligence, vol. 5, pp. 337-351, 1991.
[6] M. Ammar, Y. Yoshida and T. Fukumura, "A New Effective Approach for Automatic Off-line Verification of Signatures by using Pressure Features," in Proc. 8th Int. Conf. Pattern Recognition, Paris, 1986, pp. 566-569.
[7] R. N. Nagel and A. Rosenfeld, "Computer Detection of Freehand Forgeries," IEEE Trans. Computers, vol. 26, pp. 895-905, 1977.
[8] R. Baron and R. Plamondon, "Acceleration Measurement with an Instrumented Pen for Signature Verification and Handwriting Analysis," IEEE Trans. Instrumentation and Measurement, vol. 38, pp. 1132-1138, 1989.
[9] R. Sabourin and R. Plamondon, "Preprocessing of Handwritten Signatures from Image Gradient Analysis," in Proc. 8th Int. Conf. Pattern Recognition, Paris, 1986, pp. 576-579.
[10] C. Simon, E. Levrat, R. Sabourin and J. Bremont, "A Fuzzy Perceptron for Offline Handwritten Signature Verification," in Proc. Brazilian Symp. Document Image Analysis, 1997, pp. 261-272.
[11] F. Nouboud and R. Plamondon, "Global Parameters and Curves for Off-line Signature Verification," in Proc. Int. Workshop on Frontiers in Handwriting Recognition, 1994, pp. 145-155.
[12] K. Han and K. Sethi, "Signature Identification via Local Association of Features," in Proc. Int. Conf. Document Analysis and Recognition, 1995, pp. 187-190.
[13] J. R. Ullman, Pattern Recognition Techniques. New York: Crane-Russak, 1973.