This document discusses a proposed method for offline signature verification called OSPCV (Off-line Signature Verification using Principal Component Variances). The method extracts two features from signatures: pixel density and center-of-gravity distance. It then uses Principal Component Analysis to analyze the features and train a model on signature samples. When a new test signature is presented, the system extracts the same two features and compares them with the trained model to decide whether the signature is genuine or a forgery. The researchers believe this method provides better accuracy than existing offline signature verification systems, especially in differentiating genuine signatures from skilled forgeries. It aims to overcome challenges from intra-personal and inter-personal signature variations.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
A comparative study of biometric authentication based on handwritten signatures (eSAT Journals)
Abstract: With increasing concerns for security, automated systems for authorization and authentication have become enormously important in every sector today. There are many methods for personal identification, such as smart cards, PINs (Personal Identification Numbers), and passwords. Regardless of their efficiency and accuracy, these systems can always be stolen, lost, forgotten, cracked, or hacked, and it is for this reason that biometric-based authentication systems have gained a lot of importance worldwide. A biometric system is essentially a pattern-recognition system that recognizes a person based on a feature vector derived from a specific physiological characteristic (face, iris, retina, palm print, hand geometry) or behavioral characteristic (signature, voice, keystroke pattern) that the person possesses. Such systems are more accurate because these characteristics are unique to a particular person and vary only negligibly over time. In this paper we present a comparative study of recent advances in biometric authentication based mainly on offline handwritten signatures. Keywords: biometrics, online and offline signature verification, authentication, feature extraction, region of interest (ROI), artificial neural network.
A Review on Robust Identity Verification Using the Signature of a Person (Editor, IJMTER)
The signature is a behavioural biometric characteristic of humans and has long been a distinguishing feature for person identification. These days an increasing number of transactions, especially financial and business transactions, are authorized via signatures. There are two types of verification methods: offline signature verification and online signature verification. In this paper we review the components of an offline signature recognition and verification system and the available feature extraction techniques.
INTRODUCTION
Nowadays, person identification (recognition) and verification are very important for security and resource access control.
Biometrics is the science of automatically recognizing individuals based on their physiological and behavioural attributes.
For centuries, handwritten signatures have been an integral part of validating business transactions, contracts, and agreements.
Among the different forms of biometric recognition systems, such as fingerprint, iris, face, voice, and palm, the signature is among the most widely used.
SIGNATURE RECOGNITION
Signature recognition is the procedure of determining to whom a particular signature belongs.
Depending on how the signature images are acquired, there are two types of signature recognition systems:
Online Signature Recognition
Offline Signature Recognition
STEPS
IMAGE ACQUISITION
Signatures are collected from 50 persons on blank paper.
The collected signatures are scanned into JPG images to create the database.
PREPROCESSING
Image pre-processing is a set of techniques for enhancing raw images, whether received from cameras or sensors (on satellites, space probes, and aircraft) or taken in day-to-day life, for various applications.
The techniques for preprocessing used are
RGB to Gray Scale Conversion
Binarization
Thinning
Bounding Box
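As a rough illustration, the four preprocessing steps above can be sketched in NumPy. The threshold value, the skipped thinning step (normally a morphological algorithm such as Zhang-Suen), and the function name `preprocess` are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def preprocess(rgb):
    """Sketch of the four preprocessing steps on an HxWx3 RGB array."""
    # 1. RGB to grayscale: weighted luminance sum.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # 2. Binarization: global threshold; ink pixels become 1, background 0.
    binary = (gray < 128).astype(np.uint8)
    # 3. Thinning would normally reduce strokes to one-pixel width
    #    (e.g. Zhang-Suen); omitted here for brevity.
    thinned = binary
    # 4. Bounding box: crop to the smallest rectangle containing ink.
    rows = np.any(thinned, axis=1)
    cols = np.any(thinned, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return thinned[r0:r1 + 1, c0:c1 + 1]
```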
FEATURE EXTRACTION
Features are the characteristics extracted from the processed image.
Two categories of feature extraction are used:
Global Features
Grid Features
DWT
After applying the DWT, each of the 9 blocks is decomposed into horizontal, vertical, and diagonal detail components. From each component, two features, the horizontal and vertical projection positions, are extracted, giving 54 (9 × 3 × 2) features in total.
Grid features extracted from each block are
Horizontal Projection Position
Vertical Projection Position
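A minimal sketch of the grid/DWT feature extraction, assuming a 3×3 grid, a one-level Haar DWT (the source does not name the wavelet), and "projection position" interpreted as the peak index of the row/column projection profile:

```python
import numpy as np

def haar_dwt2(block):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4   # horizontal detail
    hl = (a - b + c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return ll, (lh, hl, hh)

def projection_positions(component):
    """Row and column indices where the projection profiles peak."""
    h_pos = int(np.argmax(np.abs(component).sum(axis=1)))
    v_pos = int(np.argmax(np.abs(component).sum(axis=0)))
    return h_pos, v_pos

def grid_features(img):
    """Split the image into a 3x3 grid and collect 9 * 3 * 2 = 54 features."""
    h, w = img.shape
    bh, bw = h // 3, w // 3
    feats = []
    for i in range(3):
        for j in range(3):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            _, details = haar_dwt2(block)
            for comp in details:
                feats.extend(projection_positions(comp))
    return feats
```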
Algorithm for the Training Phase
Description: Retrieve signature images from the database.
Input: Training sample images.
Output: A trained back-propagation neural network.
Begin
Read the training sample images
Step 1: Pre-processing
Convert the image into a grayscale image.
Convert the grayscale image into a binary image.
Apply the thinning process.
Apply the bounding box.
Step 2: Feature extraction.
Step 3: Back-propagation neural network training.
End // end of proposed algorithm
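The training step above can be sketched with a small NumPy back-propagation network. The architecture (one hidden sigmoid layer with a bias input), the hyperparameters, and the toy inputs standing in for extracted features are all illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def add_bias(X):
    """Append a constant 1 column so the network learns bias terms."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

def train_bpnn(X, y, hidden=8, epochs=5000, lr=1.0):
    """Minimal back-propagation training for one hidden layer (a sketch)."""
    Xb = add_bias(X)
    n, d = Xb.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        h = sigmoid(Xb @ W1)                         # forward: hidden layer
        out = sigmoid(h @ W2)                        # forward: output
        delta_out = (out - t) * out * (1 - out)      # output error term
        delta_h = (delta_out @ W2.T) * h * (1 - h)   # back-propagated error
        W2 -= lr * h.T @ delta_out / n               # gradient updates
        W1 -= lr * Xb.T @ delta_h / n
    return W1, W2

def predict(X, W1, W2):
    return (sigmoid(sigmoid(add_bias(X) @ W1) @ W2) > 0.5).astype(int).ravel()
```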
Handwritten Signature Verification using Artificial Neural Network (Editor, IJMTER)
This paper reviews various signature verification approaches, feature sets, online databases, and types of features. Processing an online database, extracting a combination of global and local features from a signature image, and using a Multilayer Perceptron feed-forward network trained with the back-propagation algorithm is proposed to classify genuine and forged (random, simple, and skilled) offline signatures.
Keystroke dynamics, or typing dynamics, is the detailed timing information that describes exactly when each key was pressed and when it was released as a person is typing at a computer keyboard.
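For illustration, the two timing features typically derived from such press/release events, dwell time and flight time, might be computed as follows (the tuple-based event format is a hypothetical assumption):

```python
def dwell_and_flight(events):
    """Dwell = how long each key is held; flight = gap between one key's
    release and the next key's press. events is a list of
    (key, press_time_ms, release_time_ms) tuples in typing order."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```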
Fraud Detection Using Signature Recognition (Tejraj Thakor)
The signature is an important biometric of a human being that can be used to authenticate identity. The problem arises when someone decides to imitate our signature and steal our identity.
The image of the signature is captured by a mobile-phone camera, from which dynamic and spatial information is extracted using image processing techniques such as grayscale conversion, noise removal, normalization, border elimination, and feature extraction.
Signature matching depends on an SVM. The SVM classifier is trained with sample images in a database obtained from those individuals whose signatures are to be authenticated by the system. The proposed system uses an SQLite database as the back-end and the Android platform as the front-end.
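A hedged sketch of the SVM matching step using scikit-learn; the synthetic feature vectors below stand in for the extracted signature features, which the abstract does not specify:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical feature vectors: genuine signatures cluster around one mean,
# forgeries around another (stand-ins for real extracted features).
genuine = rng.normal(loc=0.0, scale=0.3, size=(20, 5))
forged = rng.normal(loc=1.5, scale=0.3, size=(20, 5))
X = np.vstack([genuine, forged])
y = np.array([1] * 20 + [0] * 20)  # 1 = genuine, 0 = forged

# Train the RBF-kernel SVM classifier on the sample feature vectors.
clf = SVC(kernel="rbf").fit(X, y)
```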
The use of fingerprints in a voting system for registration and authentication
application has its limitations. Among these limitations are mismatches caused by
disparity in fingerprint trait and templates of voters taken at the point of registration
and at the point of authentication (voter's accreditation). Manual labour, aging,
variations in user interaction (e.g. pressure on the scanner), environmental changes
and injuries are a few of the factors that can cause these disparities. The iris is more
resistant to these factors that cause disparity in biometrics. In this designed model, the
iris was used in place of fingerprints as the biometric measure to register and
authenticate voters. An iris scanner obtains the voter’s iris image, segments and
digitizes it. The digitized iris image of the voter is used as a training data and stored
in the template. This template is stored together with the voter’s particulars in a
database. An algorithm designed in the C# (C sharp) language issues a PIN for the
voter's authentication. At the point of authentication, the voter's PIN is keyed in.
The iris scanner obtains the voter’s iris image, generates a template of the iris and
with the aid of the system’s embedded algorithm, compares the details of the voter’s
pin and iris trait with the one in the database for a match. A match grants the voter
the pass to vote. A mismatch denies the voter access to the voting system. This
implemented Iris Recognition Technology drastically reduces the chances of
mismatches for genuine voters and denies imposters in the voting system due to its
reliability and robustness as revealed by the tests carried out on the designed model.
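The PIN-plus-iris check described above might look like the following sketch, assuming iris codes are bit strings compared by fractional Hamming distance; the threshold of 0.3 is an illustrative value, not from the source:

```python
def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def authenticate(pin, live_code, database, threshold=0.3):
    """Grant access only if the PIN exists and the live iris code is within
    the Hamming-distance threshold of the enrolled template."""
    enrolled = database.get(pin)
    if enrolled is None:
        return False  # unknown PIN: deny access
    return hamming_distance(live_code, enrolled) <= threshold
```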
Highly Secured Bio-Metric Authentication Model with Palm Print Identification (IJERA Editor)
Biometric technologies provide higher security with improved accuracy for personal identification and highly secure identification problems. They have become an emerging technology in recent years due to transaction fraud, security breaches, and personal identification needs. The beauty of biometric technology is that it provides a unique code for each person which cannot be copied or forged by others. To overcome the drawbacks of fingerprint identification systems, this paper proposes a palm-print-based personal identification system, a promising and emerging research area in biometric identification due to its uniqueness, scalability, faster execution speed, and the large area available for extracting features. It provides higher security than fingerprint biometric systems through rich features such as wrinkles, continuous ridges, principal lines, minutiae points, and singular points. The main aim of the proposed palm print identification system is higher accuracy and increased speed in identifying the palm prints of several users. The system extracts the region of interest (ROI) with morphological operations, then applies the un-decimated bi-orthogonal wavelet (UDBW) transform to extract low-level features of registered palm prints and compute their feature vectors (FVs); comparison is done by measuring the distance between the registered palm feature vector and the test palm print feature vector. Simulation results show that the proposed biometric identification system provides accurate and reliable recognition rates.
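The final distance-based comparison can be sketched as a nearest-feature-vector match. The UDBW feature extraction itself is omitted here, and both the Euclidean metric and the rejection threshold are illustrative assumptions:

```python
import numpy as np

def match_palm(test_fv, enrolled, threshold=1.0):
    """Return the identity whose enrolled feature vector is nearest to the
    test vector, or None if even the best match exceeds the threshold."""
    best_id, best_dist = None, float("inf")
    for user_id, fv in enrolled.items():
        dist = float(np.linalg.norm(test_fv - fv))  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= threshold else None
```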
HMM-Based Face Recognition System with SVD Parameter (ijtsrd)
In today's increasingly digital world, reliable personal authentication has become an important human-computer interface activity, and establishing a person's identity is critical. Existing security mainly depends on passwords, swipe cards, or token-based approaches to control access to physical and virtual spaces. Such methods, although widely used, have weaknesses: tokens, badges, and access cards can be shared or stolen, and passwords and PINs can be stolen electronically. In addition, they cannot distinguish between the authentic user and someone who has merely obtained the token or the knowledge. A biometric authentication system, such as face or hand-gesture recognition, can make a system more secure and simple. In this paper, a Hidden Markov Model (HMM) based face recognition system using Singular Value Decomposition (SVD) is proposed. Neha Rana | Bhavna Pancholi, "HMM-Based Face Recognition System with SVD Parameter", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 2, Issue 4, June 2018. URL: http://www.ijtsrd.com/papers/ijtsrd12938.pdf http://www.ijtsrd.com/engineering/electrical-engineering/12938/hmm-based-face-recognition-system-with-svd-parameter/neha-rana
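As a sketch of the SVD parameterization idea, the observation vector for an image block can be taken as its leading singular values; the block size and the choice of k are assumptions, not the paper's exact settings:

```python
import numpy as np

def svd_observation(block, k=3):
    """Top-k singular values of an image block, used as a compact
    observation vector for an HMM-style face model (a sketch)."""
    singular_values = np.linalg.svd(block, compute_uv=False)  # sorted descending
    return singular_values[:k]
```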
Signature verification based on proposed fast hyper deep neural network (IAES IJAI)
Many industries make widespread use of handwritten signature verification systems, including banking, education, legal proceedings, and criminal investigation, where verification and identification are absolutely necessary. In this research, we develop an accurate offline signature verification model that can be used in a writer-independent scenario. First, the handwritten signature images go through four preprocessing stages to make them suitable for extracting unique features. Then, three types of features, principal component analysis (PCA) as appearance-based features, the gray-level co-occurrence matrix (GLCM) as texture features, and the fast Fourier transform (FFT) as frequency features, are extracted from the signature images to build a hybrid feature vector for each image. Finally, to classify the signature features, we design a proposed fast hyper deep neural network (FHDNN) architecture. Two datasets, SigComp2011 and CEDAR, are used to evaluate the model. The results demonstrate that the suggested model can operate with accuracy equal to 100%, outperforming several of its predecessors. In terms of precision, recall, and F-score it gives very good results on both datasets, reaching (1.00, 0.487, and 0.655 respectively) on the SigComp2011 dataset and (1.00, 0.507, and 0.672 respectively) on the CEDAR dataset.
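A minimal sketch of building such a hybrid feature vector; the specific FFT magnitudes, the GLCM contrast statistic, and the omission of the PCA projection are simplifying assumptions made for illustration:

```python
import numpy as np

def fft_features(img, k=4):
    """Top-k magnitudes of the 2-D FFT spectrum (frequency features)."""
    magnitudes = np.abs(np.fft.fft2(img)).ravel()
    return np.sort(magnitudes)[-k:]

def glcm_contrast(img, levels=8):
    """Contrast of a horizontal gray-level co-occurrence matrix
    (one representative texture feature)."""
    q = (img * (levels - 1)).astype(int)        # quantize to gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    idx = np.arange(levels)
    return float(((idx[:, None] - idx[None, :]) ** 2 * glcm).sum())

def hybrid_vector(img):
    """Concatenate frequency and texture features (PCA projection omitted)."""
    return np.concatenate([fft_features(img), [glcm_contrast(img)]])
```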
A Novel Automated Approach for Offline Signature Verification Based on Shape ... (Editor, IJCATR)
The handwritten signature has been the most natural and long-lasting authentication scheme, in which a person draws a pattern of lines or writes his name in a distinctive style. Signature recognition and verification are behavioural biometrics and are very challenging due to the variation that can occur in a person's signature because of age, illness, and emotional state. As far as the representation of the signature is concerned, a classical technique of thinning or skeletonization is mostly used. In this paper, we propose a new methodology for signature verification that uses structural information and the original strokes, instead of a skeleton or thinned version, to analyse and verify the signature. The approach is based on sketching a fixed-size grid over the signatures and obtaining 2-dimensional unique templates, which are then compared and matched to verify a query signature as genuine or forged. To compute the similarity score between two signature grids, we follow a template matching rule, and the signature grid's cells are mapped and matched with respect to position. The proposed framework is fast and highly accurate, with reduced false acceptance and false rejection rates.
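The grid-template idea can be sketched as a binary occupancy template compared cell by cell; the grid size and the fraction-of-agreeing-cells score are illustrative choices, not the paper's exact matching rule:

```python
import numpy as np

def grid_template(img, grid=4):
    """Binary occupancy template: 1 where a grid cell contains any ink."""
    h, w = img.shape
    ch, cw = h // grid, w // grid
    template = np.zeros((grid, grid), dtype=int)
    for i in range(grid):
        for j in range(grid):
            cell = img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            template[i, j] = int(cell.any())
    return template

def similarity(t1, t2):
    """Fraction of grid cells that agree between two templates."""
    return float((t1 == t2).mean())
```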
Freeman Chain Code (FCC) Representation in Signature Fraud Detection Based On... (CSCJournals)
This paper presents a signature verification system that uses the Freeman Chain Code (FCC) for directional features and data representation. In total, 47 features were extracted from the signature images from six global features. Before feature extraction, the raw images underwent pre-processing stages: binarization, noise removal using a median filter, cropping, and thinning to produce a Thinned Binary Image (TBI). The Euclidean distance between nearest neighbours is measured and matched to find the result. The MCYT-SignatureOff-75 database was used. Based on our experiment, the lowest FRR achieved is 6.67% and the lowest FAR is 12.44%, with only 1.12 seconds of computation time for the nearest neighbour classifier. The results are compared with an Artificial Neural Network (ANN) classifier.
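For reference, the Freeman chain code of a stroke path can be computed as follows, using the standard 8-direction convention (0 = east, numbered counter-clockwise, expressed in image coordinates where row index grows downward):

```python
# 8-connected Freeman chain code directions as (row_delta, col_delta) -> code.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(path):
    """Freeman chain code for a sequence of 8-adjacent (row, col) pixels."""
    code = []
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        code.append(DIRS[(r1 - r0, c1 - c0)])
    return code
```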
Offline Handwritten Signature Identification and Verification using Multi-Res... (CSCJournals)
In this paper, we propose a new method for offline (static) handwritten signature identification and verification based on the Gabor wavelet transform. The idea is to offer a simple and robust method for extracting features based on Gabor wavelets whose dependency on the nationality of the signer is reduced to a minimum. After a pre-processing stage comprising noise reduction and normalisation of the signature image by size and rotation, a virtual grid is placed on the signature image. Gabor wavelet coefficients with different frequencies and directions are computed at each point of this grid and then fed into a classifier. The shortest weighted distance is used as the classifier, where the weight used in computing the shortest distance is based on the distribution of instances in each signature class. One advantage of this system is its capability to identify and verify signatures of different nationalities; it has therefore been tested on four signature datasets with different nationalities, including Iranian, Turkish, South African, and Spanish signatures. Experimental results and comparison with other systems are consistent with the desired outcomes. Despite using the simplest method of classification, the nearest neighbour, the proposed algorithm has very good capabilities in comparison with other algorithms. Comparing the results of our system with the accuracy of human identification and verification shows that human identification is more accurate, but our proposed system has a lower error rate in verification.
OFFLINE SIGNATURE VERIFICATION SYSTEM FOR BANK CHEQUES USING ZERNIKE MOMENTS,... (ijaia)
The handwritten signature is the most accepted and economical means of personal authentication. It can be verified using online or offline schemes. This paper proposes a signature verification model combining Zernike moment features with circularity and aspect ratio. Unlike characters, signatures vary each time because of their behavioural biometric nature. Signatures can be identified by their shape, and moments are good translation- and scale-invariant shape descriptors. The amplitude and phase of the Zernike moments, together with the circularity and aspect ratio of the signature, are extracted, combined for verification and fed to a feedforward backpropagation neural network, which classifies the signature as genuine or forged. Experimental results reveal that combining Zernike moments with the two mentioned geometrical properties gives higher accuracy than using them individually. The combined feature vector yields a mean accuracy of 95.83%. When this approach is compared with the literature, it proves to be more effective.
An offline signature verification using pixels intensity levelsSalam Shah
Offline signature recognition has great importance in our day-to-day activities. Researchers are trying to use signatures as biometric identification in various areas such as banks, security systems and other identification settings. Fingerprint, iris, thumb-impression and face-detection based biometrics are successfully used for identifying individuals because of their static nature. However, people's signatures show variability that makes it difficult to recognize original signatures correctly and to use them as biometrics. Handwritten signatures are important in banks for cheque and credit card processing and for legal and financial transactions, and signatures are the main target of fraud. To deal with complex signatures, there should be a robust signature verification method in places such as banks that can correctly classify signatures as genuine or forged to avoid financial fraud. This paper presents a pixel-intensity-level based offline signature verification model for the correct classification of signatures. To achieve this, three statistical classifiers are used: Decision Tree (J48), probability-based Naïve Bayes (NB tree) and Euclidean-distance-based k-Nearest Neighbor (IBk).
To compare the accuracy rates of offline signatures with online signatures, the three classifiers were also applied to an online signature database, achieving accuracy rates of 99.90% with the decision tree (J48), 99.82% with the Naïve Bayes tree and 98.11% with k-Nearest Neighbor (with 10-fold cross-validation). For offline signatures, the accuracy rates were 64.97% with the decision tree (J48), 76.16% with the Naïve Bayes tree and 91.91% with k-Nearest Neighbor (IBk) (without forgeries). With the inclusion of forgery signatures, the accuracy rates dropped to 55.63% with the decision tree (J48), 67.02% with the Naïve Bayes tree and 88.12% with k-Nearest Neighbor.
RELATIVE STUDY ON SIGNATURE VERIFICATION AND RECOGNITION SYSTEMAM Publications
Signature verification is among the first few biometrics to be used for verification and one of the natural ways of authenticating a person's identity. The user feeds scanned images of the signature into the computer; after image enhancement, noise reduction, feature extraction and neural network training, the signature images are verified. Even now, thousands of financial and business transactions are being authorized via signatures, so an automatic signature verification system is needed. This paper presents a brief review of various approaches based on the different datasets, features and training techniques used for verification.
Comprehensive Review of Offline Signature Verification Mechanismsijtsrd
One of the oldest and most well-known biometric authentication procedures in modern culture is the verification of handwritten signatures. The field is divided into online and offline areas depending on the acquisition procedure. In online signature verification, the entire signing procedure is captured using some sort of acquisition equipment, whereas offline signature verification uses only scanned images of the signatures. In this paper, we propose an image-based offline signature recognition and verification system. Both a Support Vector Machine and an artificial neural network are employed to support the goal of this thesis, and improved modern processes for feature extraction are presented. Two independent sequential neural networks are created, one for verifying and the other for recognizing signatures, i.e. for detecting forgery. A recognition network regulates the parameters of the verification network, which are generated separately for each signature. A signature code and an acceptable dataset are used to rigorously validate the system's overall performance. Shilpee Agrawal | Dr. Mohd Ahmed, "Comprehensive Review of Offline Signature Verification Mechanisms", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-6, October 2022. URL: https://www.ijtsrd.com/papers/ijtsrd51950.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/51950/comprehensive-review-of-offline-signature-verification-mechanisms/shilpee-agrawal
Offline signature identification using high intensity variations and cross ov...eSAT Publishing House
Offline Handwritten Signature Verification using Neural Networkijiert bestjournal
Different biometric techniques have been discussed for identification, such as face reading, fingerprint recognition and retina scanning, which are known as vision-based identification. There are also non-vision-based identification methods such as signature verification and voice recognition. Signature verification plays a vital role in financial, commercial and legal matters. A signature by any person is considered an approval of the work, so the signature is the preferred means of authentication. In this paper, signature verification is performed by means of image processing, geometric feature extraction and a neural network technique.
Reduction of False Acceptance Rate Using Cross Validation for Fingerprint Rec...IJTET Journal
Abstract— In the field of biometric modalities, the fingerprint is considered one of the most widely used methods for establishing individual identity, and fingerprint authentication is used in many applications for security purposes. In biometric systems, the input images are binarized and features are extracted. Minutiae matching in fingerprint identification is done by identifying the minutiae points of interest and their relationships. Validation testing in the proposed system uses k-fold cross-validation with two image sets, a training set and a test set, to find the image that matches the input image and to increase recognition accuracy by reducing the system's false acceptance rate.
Automatic signature verification with chain code using weighted distance and ...eSAT Journals
Abstract Signature forgery can be restricted by either online or offline signature verification techniques. Online verification matches the pre-processed signature dynamically by detecting the motion of the stylus during signing, while offline verification performs the match using a two-dimensional scanned image of the signature. This paper surveys the various techniques available for offline signature verification along with their shortcomings.
Keywords: Signature Verification, Weighted Distance, High Pressure Factor, Normalization, Threshold Value
Overlapped Fingerprint Separation for Fingerprint AuthenticationIJERA Editor
Overlapped fingerprints captured at a crime scene play a significant role as evidence for catching criminals. Latent fingerprints are accidentally left skin impressions and are typically found with broken ridge composition, overlapped patterns and spoiled minutiae information. A Graphical User Interface (GUI) system is developed using MATLAB R2015a, and the project also includes a standalone program for this system. The main purpose of the GUI is to obtain the real end points and real branch points of an overlapped fingerprint image; these values are used in the image matching process to identify the owner of the overlapped fingerprint. Image enhancement consists of histogram equalization, enhancement by a Fast Fourier Transform (FFT) factor, and image binarization, while minutiae extraction consists of ridge thinning, region of interest (ROI) extraction, and the minutiae extraction process itself, carried out in sequence.
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 17, Issue 1, Ver. V (Jan – Feb. 2015), PP 08-23
www.iosrjournals.org
OSPCV: Off-line Signature Verification using Principal Component Variances

Arunalatha J S [1], Prashanth C R [2], Tejaswi V [3], Shaila K [1], K B Raja [1], Dinesh Anvekar [4], Venugopal K R [1], S S Iyengar [5], L M Patnaik [6], Arun C [7], Pawan K S [8]

[1] University Visvesvaraya College of Engineering, Bangalore University, Bangalore, India
[2] Dr. Ambedkar Institute of Technology, Bangalore, India
[3] National Institute of Technology, Surathkal, Karnataka, India
[4] Nitte Meenakshi Institute of Technology, Bangalore, India
[5] Florida International University, Miami, Florida, USA
[6] Indian Institute of Science, Bangalore, India
Abstract: The signature verification system is among the most sought-after biometric verification systems. Being a behavioral biometric that can always be imitated, it poses a challenge to the researcher designing a system that must counter both intrapersonal and interpersonal variations. This paper presents a comprehensive approach to off-line signature verification based on two features, namely the pixel density and the centre-of-gravity distance. The data processing consists of two parallel processes, signature training and test signature analysis. Signature training involves extracting features from the samples in the database, while test signature analysis involves extracting features from the test signature and comparing them with the trained values from the database. The features are analyzed using Principal Component Analysis (PCA). The proposed work provides a feasible result and a notable improvement over existing systems.
Keywords: Biometrics, Centre of gravity distance, Off-line signature verification, Pixel density, Principal Component.
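As a concrete illustration of the two features named in the abstract, the sketch below computes a per-cell pixel density and a centre-of-gravity distance over a grid placed on a binary signature image. This is a minimal sketch: the 2×2 grid and the tiny 4×4 image are hypothetical examples, not the paper's actual grid size or data.

```python
# Sketch of the two features: per-cell pixel density and
# centre-of-gravity (CoG) distance on a binary signature image.
# The grid size and image below are illustrative assumptions.

def cell_features(img, rows=2, cols=2):
    """Split a binary image (list of 0/1 rows) into a rows x cols grid
    and return (pixel_density, cog_distance) for each cell."""
    h, w = len(img), len(img[0])
    ch, cw = h // rows, w // cols
    feats = []
    for r in range(rows):
        for c in range(cols):
            cell = [row[c * cw:(c + 1) * cw] for row in img[r * ch:(r + 1) * ch]]
            ink = [(y, x) for y, rowv in enumerate(cell)
                   for x, v in enumerate(rowv) if v]
            density = len(ink) / (ch * cw)      # fraction of ink pixels
            if ink:                             # CoG measured from the cell corner
                cy = sum(y for y, _ in ink) / len(ink)
                cx = sum(x for _, x in ink) / len(ink)
                cog_dist = (cy ** 2 + cx ** 2) ** 0.5
            else:
                cog_dist = 0.0
            feats.append((density, cog_dist))
    return feats

img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 1, 1]]
feats = cell_features(img)
print(feats)
```

In a real system each cell's pair of values would be stacked into the feature vector that PCA then analyzes.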
I. Introduction
Identification of individuals is a very important aspect of security. The identification techniques opted
may vary according to conveniences and requirements. The identification may be carried out by identity cards,
pin codes, smart cards etc., but these are easily misused. A better way of individual identification is on a
biological scale which is biometric verification. The biometric verification involves identification of individuals
based on physiological and behavioral features. The physiological features include iris, face, finger print, DNA
etc., and the behavioral features include voice, signature, gait which are unique to a person.
Signatures have been a primary method of identifying a person in all fields, for purposes such as credit cards, contract agreements, cheques, wills and other important documents; thus the signature is a widely used behavioral biometric for identifying a person. In day-to-day life millions of signatures need to be verified; this is impossible by visual inspection alone, so an automated system is necessary for determining the authenticity of a signature. Several decades have witnessed intense research in the field of signature verification, especially off-line signature verification. Signature verification is the process of discriminating between genuine and forged sets of handwritten signatures, and it is a difficult task since signatures reflect the physical and psychological state of an individual.
Several techniques using different signature features have been developed for signature verification. Signature verification is of two main types: on-line or dynamic verification, and off-line or static verification. The main steps of a signature verification system are preprocessing, thinning, feature extraction and verification. Feature selection and extraction are fundamental processes in any verification system. The features used in off-line signature verification include the signature image area, length-to-width ratio, geometric centers, the angle and distance of a pixel from a reference point, and signature height and width. On-line signature verification systems use features such as pen pressure, tilt, velocity, the number of strokes required, etc. These feature sets provide ambiguous performance; hence signature verification remains a challenging task.
Off-line verification of signatures is done by considering an image of the signature, obtained using a scanner or a camera, and extracting its features. Since the signature is scanned from paper, it is treated as a static image. Off-line signature verification is difficult due to the limited number of features that can be extracted and the absence of dynamic features. Thus off-line global and local features are extracted from the original signature and fed into the system, and are later compared with those of test signatures using various comparison techniques such as Support Vector Machines, Neural Networks, Hidden Markov Models, time warping, Principal Component Analysis (PCA), etc.

DOI: 10.9790/0661-17150823 www.iosrjournals.org
In the decision-making phase, forged images can be classified into three groups: (a) random, (b) simple, and (c) skilled. Random forgeries are made without any knowledge of the signer's name or the signature's shape. Simple forgeries are produced knowing the name of the signer but without having an example of the signer's signature. Skilled forgeries are produced by people looking at an original instance of the signature and attempting to imitate it as closely as possible. The disadvantages of on-line verification are: (i) a heavy computational load and (ii) warping forgeries. The disadvantages of off-line verification are: (i) the signature can be forged more easily than an on-line signature and (ii) features such as pen pressure and velocity cannot be acquired.
As mentioned earlier, there are three important steps in signature verification: preprocessing, feature extraction and data comparison. Preprocessing can be performed in various ways and is carried out to make data extraction and verification easier and more efficient. The various preprocessing methods are binarization, background elimination, noise reduction, width normalization, thinning, rotation normalization, smoothing and size normalization. The next process is feature extraction: many local and global features are extracted from the preprocessed signature image, and a database is created using various learning and comparison techniques. Comparison is performed by extracting the features of the test signature and comparing them with those of the originals using techniques such as correlation, analysis of variance, and Euclidean or Hamming distance measures.
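Two of the stages listed above, binarization and Euclidean-distance comparison of feature vectors, can be sketched as follows. The threshold value and the toy arrays are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of two pipeline stages: binarization of a grey-scale
# image and Euclidean-distance comparison of feature vectors.
# Threshold and data are illustrative assumptions.

def binarize(gray, threshold=128):
    """Map a grey-scale image (list of rows, values 0-255) to 0/1 ink
    pixels. Dark pixels (below the threshold) are treated as ink."""
    return [[1 if v < threshold else 0 for v in row] for row in gray]

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

gray = [[250,  30, 240],
        [ 40, 245,  35]]
binary = binarize(gray)
print(binary)                                   # [[0, 1, 0], [1, 0, 1]]
print(euclidean([0.5, 1.0], [0.1, 0.7]))        # 0.5
```

A test signature would be accepted when its feature-vector distance to the trained template falls below a chosen threshold.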
Motivation: The identities of individuals have been depicted and forged since time immemorial. Previously the seals of kings and important people were forged; with the advancement of time and technology, modern biometric features are also prone to forgery, and the most misused of all biometric features is the signature. Signatures are widely used for banking and legal purposes, which necessitates the verification of thousands of signatures every day on cheques, wills, etc. This process is unachievable by manual means, since the processing speed cannot meet the demand and security is diluted. This opens the way for various types of fraudulent activity and has motivated us to come up with an innovative off-line signature verification system.
Contribution: In this paper, Off-line Signature Verification using Principal Component Variances is proposed. The system can be efficiently used for real-time personal identification applications. The proposed OSPCV effectively differentiates between genuine and forged signature samples, overcomes intra- and inter-signature variations, and provides a better Equal Error Rate (EER).
Organization of the paper: The remaining sections are organized as follows. Related work is presented in Section II. Background work is discussed in Section III. The proposed signature verification model is described in Section IV, and the OSPCV algorithm in Section V. Experimental results and performance analysis are presented in Section VI, and Section VII contains the conclusion.
II. Related Work
Mustafa et al., [1] proposed an off-line signature verification system by considering four main features:
Pixel density, Centre of gravity, Angle and Distance. The analysis is based on a technique called ANOVA
(Analysis of Variance). The results also show that the combination of centre of gravity and pixel density
features is the best for distinguishing between genuine and skilled forgeries.
Prakash and Guru, [2] developed a method for symbolic representation of off-line signatures based on
relative distances between centroids. Distances between centroids of off-line signatures are used to form an
interval valued symbolic feature vector for representing signatures. Similar signatures are clustered in each class
and the cluster based symbolic representation for signature verification is also investigated.
Ismail et al., [3] proposed an off-line signature verification model using an Artificial Neural Network. Before feature extraction, a pre-processing stage of noise removal and normalization is performed to prepare the signature. The extracted features are moment features, which are global shape characteristics described by moments, and grey-scale co-occurrence matrices of size N×N; when the matrix is too large for direct use, measurements such as homogeneity, contrast, entropy and energy are used instead. Principal Component Analysis is used for feature extraction and a database is created. The originality of the test signature is verified using an Artificial Neural Network for comparison.
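The PCA step referenced in several of these systems can be sketched generically as an eigen-decomposition of the feature covariance matrix, projecting feature vectors onto the leading principal components. The toy data below is hypothetical, not drawn from any signature database.

```python
import numpy as np

# Generic PCA sketch: project feature vectors onto the top k principal
# components of their covariance matrix. Toy data is hypothetical.

def pca_project(X, k=1):
    """Return X projected onto its k leading principal components."""
    Xc = X - X.mean(axis=0)                    # centre the data
    cov = np.cov(Xc, rowvar=False)             # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # pick the k largest components
    return Xc @ top

X = np.array([[2.0, 1.9], [1.0, 1.1], [3.0, 3.2], [0.0, -0.1]])
Z = pca_project(X, k=1)
print(Z.shape)   # (4, 1)
```

The variances of these projections are the "principal component variances" that methods of this kind compare between training and test signatures.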
Blankers et al., [4] describe a signature verification competition in which participants were given the liberty of using two kinds of datasets: off-line datasets containing only static data, and on-line datasets containing both static and dynamic data, so that a vast number of signatures were available for analysis. All the signatures were stored in the systems beforehand; the on-line files were saved as text documents and the off-line files as PNG images. Team scores were decided on the basis of binary codes, 0 for a wrong match and 1 for a correct match, and the programs used for evaluation ran from the Linux or Windows (win32) command line. The standards for declaring the results were preset by the NFI (Netherlands Forensic Institute) on the basis of the EER (Equal Error Rate), FRR (False Rejection Rate) and FAR (False Acceptance Rate).
Vargas et al., [5] proposed a signature verification system based on analysis of the pressure distribution in a signature: the greater the pressure, the greater the pixel density, so the pressure feature is captured in the form of pixel density. The technique used is called the pseudo-cepstral method. It involves calculating the histogram of the grey-scale image and using it as a spectrum for computing pseudo-cepstral coefficients, which are then used to estimate a unique minimum-phase sequence. This sequence serves as the feature vector for signature verification, and the optimal pseudo-cepstral coefficients are selected for best performance.
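A rough sketch of the pseudo-cepstral idea follows, under the assumption that the grey-level histogram is treated as a spectrum and cepstrum-like coefficients are taken from its log magnitude via an inverse FFT. The bin count and the random image are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Cepstrum-like coefficients from a grey-level histogram treated as a
# spectrum. Bin count and random test image are illustrative assumptions.

def pseudo_cepstrum(gray_img, bins=32):
    hist, _ = np.histogram(gray_img, bins=bins, range=(0, 256))
    spectrum = hist.astype(float) + 1e-6        # avoid log(0)
    return np.fft.ifft(np.log(spectrum)).real   # cepstrum-like coefficients

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16))       # stand-in grey-scale image
coeffs = pseudo_cepstrum(img)
print(coeffs.shape)   # (32,)
```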
Ioana et al., [6] proposed an off-line signature verification model that extracts a large number of features from the scanned signature, including a couple of new distance-based features. First, the image is scanned, converted into a digital image and resized to 400×400 pixels; the noise is then removed and the image is binarized and skeletonised. The extracted features are global features of five main categories: extreme point position, number of pixels, histogram, pixel position and angular value. For classification, two methods were first considered, the Naive Bayes method and the Multilayer Perceptron classifier; the Naive Bayes classifier was found to be more accurate.
Vargas et al. [7] developed a system wherein features representing the information of high pressure points from a handwritten signature image are analyzed for off-line verification. An approach for determining the high pressure threshold from grey scale images is proposed. Two images are taken, one with the high pressure points extracted and the other a binary version of the original signature. They are transformed to polar coordinates, from which the pixel density ratio between them is calculated. The polar space is divided into angular and radial segments, over which the local analysis of the high pressure distribution is done. Eventually, two vectors containing the density distribution ratio are calculated for the nearest and farthest points from the geometric centre of the original signature image. Experiments have been carried out using a database with 160 people's signatures. The accuracy of the system for simple forgeries is tested with KNN and PNN. KNN stands for K-Nearest Neighbor, a technique that classifies a new object by its distances to the nearest neighboring training samples in the feature space. A Probabilistic Neural Network (PNN) is a 3-layer, feed-forward, one-pass training algorithm used for mapping and classifying data.
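The KNN idea described above can be sketched in a few lines. This is a hypothetical illustration of the technique only; the sample points, labels and value of k are invented, not taken from [7]:

```python
# Minimal k-nearest-neighbour classifier: rank training samples by
# Euclidean distance to the query and take a majority vote among the k nearest.
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature vector."""
    # Sort training samples by distance to the query point.
    ranked = sorted(train, key=lambda s: math.dist(s[0], query))
    # Majority vote among the k nearest neighbours.
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

samples = [((0.1, 0.2), "genuine"), ((0.15, 0.25), "genuine"),
           ((0.8, 0.9), "forged"), ((0.85, 0.95), "forged")]
print(knn_classify(samples, (0.12, 0.22)))  # a point near the genuine cluster
```

In a verification setting the feature vectors would be the density-distribution vectors described above, and the label of the nearest neighbours decides whether the test signature is accepted.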
Ismail et al. [8] have developed a method in which Fourier Descriptors and Chain Codes are used as features for representing the signature image. Chain codes represent a boundary by a connected sequence of straight-line segments of specified length and direction. The identification process is divided into two sub-processes: recognition and verification. The recognition process employs Principal Component Analysis, while the verification process uses a multilayer feed-forward artificial neural network. Different distances between the chain code feature vectors are measured to evaluate the results of the recognition process.
Chen and Srihari [9] proposed the use of an embedded deformable template model based on the philosophy of multi-resolution shape features, i.e., chain code processing and extrema extraction, in order to reduce the problem involved in graph matching. Threshold-based binarization was first applied to the signature image, and chain code contour extraction was performed. Another technique used was measuring deformation by point-to-point matching, performed by matching the end points of each letter in the reference signature. A thin-plate spline mapping using a deformable template model was introduced, which used thin-plate splines for two-dimensional interpolation. The GSC (Gradient, Structural and Concavity) algorithm (Region Matching Version) was also developed to measure image characteristics at different scales, especially for multi-resolution signatures.
Emre et al., [10] proposed an off-line signature verification and recognition system based on global,
directional and grid features of signatures. The comparison was done using Support Vector Machines (SVM).
A one-against-all approach is used for training. The database consists of 1320 signatures taken from 40 persons. A total of 480 forged signatures are taken for testing. Testing is done in two ways:
verification and recognition. Verification stage involves the decision about whether the signature is genuine or
forged. Recognition stage involves the process of finding the identification of the signature owner.
Banshider et al., [11] proposed a method using geometric centre approach for feature extraction.
Euclidean distance model was used as parameters for classification of signatures. Threshold selection is based
on average and standard deviations of the Euclidean distance. The method involves scanning the signature
image, centering it and later extracting the feature points by horizontal and vertical splitting. For increasing the
accuracy, especially in case of skilled forgeries, the split images or subparts are further split to smaller units
which achieve better accuracy. Piyush et al. [12] proposed a signature verification system based on Dynamic Time Warping (DTW). The system extracts the vertical projection feature from the test signature image and then compares it with the data set using elastic matching. The database used for the purpose
DOI: 10.9790/0661-17150823 www.iosrjournals.org 10 | Page
consists of signatures of one hundred persons. The system uses a modified dynamic time warping algorithm which captures a 1-dimensional vertical projection.
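Elastic matching of two 1-dimensional projection profiles by dynamic time warping can be sketched as follows; the profiles and the absolute-difference cost here are illustrative, not the authors' modified algorithm:

```python
# Classic DTW on 1-D sequences: cost[i][j] holds the minimal cumulative cost
# of aligning a[:i] with b[:j], allowing stretches and compressions in time.
def dtw_distance(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])         # local mismatch cost
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[n][m]

ref  = [0, 2, 4, 2, 0]      # vertical projection of a reference signature
test = [0, 0, 2, 4, 2, 0]   # same shape, slightly stretched in time
print(dtw_distance(ref, test))  # 0.0: the profiles align perfectly
```

A low warping cost indicates that the test projection matches the reference despite local timing differences, which is what makes the matching "elastic".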
Yacoubi et al. [13] proposed off-line signature verification based on the Hidden Markov Model (HMM) approach. The system automatically sets an optimal acceptance/rejection decision threshold for each signature. The experiment is carried out on two databases, DB-I and DB-II. DB-I consists of 40 signers, each contributing 40 signatures, divided into 2 groups: the first group of 30 signatures is used for training and the remaining 10 for testing. DB-II consists of 60 signers, each contributing 40 signatures, divided in the same way.
Simone et al. [14] first performed preprocessing operations such as binarization, noise reduction, skew detection and character thinning, followed by layout analysis. Document analysis was then carried out to segment the documents. Pixel classification was initially applied to the binarization of document images, and region and page classification were also performed. Further improvements were based on character segmentation, which decomposed a sequence of characters into individual symbols by identifying touching characters and the locations of cutting points. Analysis of OCR (Optical Character Recognition) and word recognition was performed for feature extraction and the learning algorithms. The most important characteristic of the research was the use of Time Delay Neural Networks to deal with temporal sequences.
Robert et al., [15] proposed off-line signature verification based on directional Probability Density
Function (PDF) and completely connected feed forward Neural Networks (NN). The experiment is conducted
over a database containing 800 signature images signed by 20 individuals. The results were improved by using a
rejection criterion. The threshold adjustments have to be carried out manually to get an acceptable global error
rate and rejection rate.
Muhammad et al., [16] proposed off-line signature verification system based on a special type of
transform called Contourlet Transform (CT). They suggested that Contourlet transform can be used in feature
identification and feature extraction. The given signature was first pre-processed to remove noise and then resized according to the requirements. The modified signature was used to obtain unique features by applying a special form of the Contourlet transform. This method is helpful for verifying signatures of different languages that are closely related by alphabets. False Acceptance Rate (FAR), False Rejection Rate (FRR) and Equal Error Rate (EER) are considered as important aspects of comparison and verification.
Stephane et al., [17] proposed that several steps are involved in verification of a signature which
includes conversion of the data into a portable bit map, boundary extraction and feature extraction using MDF.
The centroid feature was another important aspect of this research, in which the signature was separated into two equal parts and the centre of gravity of each part was calculated. In order to increase the accuracy of the feature describing the surface area of a signature, the triSurface feature was introduced. The length feature
represented the length of the signature. Finally two neural classifiers were used namely the Resilient Back
Propagation (RBP) neural network and the Radial Basis Function (RBF) neural network for the testing purposes.
Ye et al., [18] proposed an off-line signature verification system based on different scale wavelet
transforms used in the curvature signature signals transformation. The system works in 3 main steps (i) extract
the inflections of the signature curves using wavelet transform (ii) match the inflections of the template
signature sequence with the test signature sequence and divide the signature into strokes (iii) match the corresponding strokes of the template signature sequence and the unknown signature sequence to arrive at the decision about genuineness or forgery. The database consists of 240 genuine signatures from 20 Chinese authors, each contributing 12 signatures. Six signatures are used for training, and the forged set consists of random and skilled forgeries.
Das et al. [19] aimed to address the problems in biometric methods by proposing an approach for off-line representation of signatures. The application introduced Directional Probability Density Functions (PDF) and a feed-forward NN with back-propagation learning applied to random signatures in the verification process. A number of methods were tried using the NN algorithm, but the one that proved beneficial was the implementation of the PSO-NN algorithm. The Particle Swarm Optimization (PSO) algorithm was executed by simulating social behavior among individuals, and the NN structure provided a simple and effective search algorithm. This research solved the underlying optimization problems.
Oliveira et al. [20] moved away from the writer-specific approach to off-line signature verification, which was tedious and time consuming. Studies showed that ROC (Receiver Operating Characteristic) graphs were the required factor for off-line verification. Thus a writer-independent approach was introduced, based on the forensic document examination approach. The impact of choosing different fusion strategies to combine the partial decisions of the SVM classifiers was analyzed, and the experiments showed that the Max rule was more efficient than the original voting method proposed. Hence the writer-independent approach proved more efficient than the writer-specific approach.
Juan and Youbin [21] present an offline signature verification system based on pseudo-dynamic
features both in writer-dependent and writer-independent mode. Features based on gray level are extracted using
Local Binary Patterns (LBP), the Gray Level Co-occurrence Matrix (GLCM) and histograms of oriented gradients. A writer-dependent SVM and Global Real AdaBoost are used for classification.
Burcer et al. [22] designed a Conic Section Function Neural Network (CSFNN) for offline signature recognition. It is a framework unifying Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) networks. The CSFNN is trained by the chip-in-the-loop learning technique to compensate for analog process variations. The
recognition is performed on CSFNN hardware using two different signature datasets. Guerbai et al., [23] design
an offline Handwritten Signature Verification System (HSVS). The curvelet transform and One-Class SVM
(OC-SVM) are used conjointly for the genuine signature verification.
Gady and Suneel [24] explore an approach for reducing the variability associated with matching signatures based on curve warping. It utilizes particle dynamics to minimize a cost function through the iterative solution of first-order differential equations, and is evaluated by measuring the precision and recall rates of documents based on signature similarity. The proposed approach can be used as a stand-alone system or as a preprocessing stage to better align signatures before applying signature recognition techniques.
Saulo and Cleber [25] develop a neural network of Radial Basis Functions, optimized by a differential evolution algorithm, with features that discriminate between genuine signatures and simulated forgeries. The proposed method outperforms that in [26]. Histograms and symbolic data can be incorporated to improve the performance.
Othman et al., [27] address the offline signature verification using artificial neural network approach. It
addresses and compares various approaches and challenges to develop the verification system for secure
services. A number of algorithms using ANN to address the issue of offline signature verification are evaluated.
III. Background
The comparison of various off-line signature verification methods is given in Table 1, and a brief explanation follows.
Shekar and Bharathi [28] propose Eigen-signature, a robust and efficient offline signature verification model based on Eigen and GLCM features, consisting of two stages: preprocessing and Eigen-signature construction. The signature is preprocessed and the Eigen-signature is constructed. The image then undergoes the signature recognition process, computed using Euclidean distance. The database used is MUKOS, built explicitly for Kannada, containing 1350 signatures from 30 individuals.
Mustafa et al., [29] propose an Offline Signature Verification using Classifier Combination of HOG
and LBP Features. The database used is GPDS-300. The system performance is measured using the skilled forgery tests of the GPDS-160 signature dataset. An SVM is used as the classifier.
Miguel et al. [30] study the robustness of offline signature verification based on gray level features. Signatures are acquired from the MCYT and GPDS databases, and checks from a check database. The signature is preprocessed, features are extracted, and LBP, LDP and LDerivP features are computed. An SVM with histogram-oriented kernels is used as the classifier.
Manoj and Puhan [31] investigate the trace transform based affine invariant features for offline
signature verification. The features are obtained from a normalized associated circus function using trace and diametric functionals. The affine relationship between intra-class and inter-class circus functions is converted to a simple scale and shift correspondence through normalization. The similarity measures for same-writer and
different-writer pairs are used to decide the threshold. The proposed system is effective over a large
unconstrained database.
Mujahed et al., [32] propose an Offline Handwritten Signature Verification System Using a Supervised
Neural Network Approach based on the back-propagation algorithm. The results show that accuracy, speed and throughput are better in comparison with the benchmark algorithms, and that less time is consumed per signature on available modern hardware.
Table 1: Comparison of various Off-line Signature Verification Methods

| Author | Approach | Database | Performance/Result | Advantages | Disadvantages |
|---|---|---|---|---|---|
| Shekar and Bharathi [28] | Eigen-signature and GLCM feature based algorithm | MUKOS | Suitable for 5 samples of any dimension feature vector | Used for Kannada offline signatures | Other state-of-the-art approaches give better performance |
| Mustafa et al. [29] | HOG and LBP features | GPDS-300 | LBP-Grid feature outperforms all other skilled forgeries | System does not require alignment of the enrolling user | Does not work for two types of signatures and gradient magnitudes |
| Miguel et al. [30] | Gray level features | GPDS960 Gray Signature, MCYT | LDerivP gives better results for all dimension feature vectors | Used for signatures on checks | Processing requires blending of check and signature |
| Manoj and Puhan [31] | Trace transform and circus functions | CEDAR | Receiver Operating Characteristic curve shows better results | Possibility of design of new functions | Cannot verify more functional invariants |
| Mujahed et al. [32] | Neural network approach | Database with 900 signatures | Supervised learning is better than unsupervised learning | Signature verification can be performed either offline or online | No specific database is used |
| Prashanth et al. [34] | Geometric points, standard scores correlation | GPDS 960 gray images database | Geometric points are compared using correlation | Simple and easy implementation | Geometric points alone are not enough |
| Ramachandra et al. [35] | Cross validation and graph matching, Euclidean distance | GPDS960 gray images database | Geometric features are compared using Euclidean distance | Verification results are cross-validated | Euclidean distance is inferior to other methods |
| Nguyen et al. [36] | Enhanced Modified Direction Feature, Neural Networks, Support Vector Machine | GPDS960 gray images database | Signature samples are trained by NN | Directional features are compared using NN | PCA outperforms EMDF and NN |
| Proposed OSPCV method | Pixel density AND centre of gravity distance, PCA | GPDS 960 gray images | The system is trained for intra-signature variations | Efficient, and error rate is lower | -- |
IV. System Model
In this section the definitions and the block diagram of Off-line Signature Verification by Analysis of
Principal Component Variances (OSPCV) system are discussed.
Definitions:
i. Signature: It is a handwritten illustration of a person's authentication depicted through lines and curves.
ii. False Accept Rate (FAR): The ratio of total number of forged signatures accepted to total number of
signatures used for comparison.
iii. False Rejection Rate (FRR): The ratio of the total number of original signatures rejected to the total number of signatures used for comparison.
iv. Equal Error Rate: It is the point of intersection of the FAR and FRR curves on the plot of FAR/FRR against
Threshold. It can also be defined as the common threshold at which both FAR and FRR are equal.
v. Average Error Rate: It is the average of the FAR and the FRR at a common threshold.
vi. Pixel Density: It is defined as the number of black pixels pertaining to the signature in the cell of size 5*5
after splitting.
vii. Centre of gravity distance: It is the distance of the centre black pixel in the cell from the left hand bottom
corner of the cell.
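Definitions (ii) and (iii) translate directly into code. The counts below are hypothetical examples, not results from this paper:

```python
# Error-rate helpers matching definitions (ii) and (iii).
def far(forged_accepted, total_compared):
    """False Accept Rate: forged signatures accepted / signatures compared."""
    return forged_accepted / total_compared

def frr(genuine_rejected, total_compared):
    """False Rejection Rate: genuine signatures rejected / signatures compared."""
    return genuine_rejected / total_compared

# e.g. 3 of 100 forgeries accepted, 5 of 100 genuine signatures rejected
print(far(3, 100), frr(5, 100))  # 0.03 0.05
```

The Equal Error Rate of definition (iv) is then read off by sweeping the decision threshold and finding the point where the FAR and FRR curves cross.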
OSPCV System:
Fig.1 shows the block diagram of the OSPCV system. This system verifies the authenticity of the given
signature of a person provided a set of genuine signatures are given for reference. The signature database
consists of signatures from various individuals which are digitised by using a scanner. The pseudo dynamic
features are considered for the comparison. These features are extracted by dividing the image into smaller cells,
and each cell provides two features. The PCA tool is used for feature verification.
Fig .1. The block diagram of the OSPCV system.
1. Database: The Grupo de Procesado Digital de Senales (GPDS) database is used as input to the system. It
consists of signatures from 960 individuals each having 24 genuine signatures and 30 forged signatures. The
first one hundred individuals' signatures are used for testing the algorithm. Sample original and forged signatures are as shown in Fig. 2.
Original signature Forged signature
Fig.2. GPDS database signature samples.
2. Pre-processing: The scanned signature image is pre-processed. The pre-processing stage consists of the following steps: (i) the scanned RGB image is converted into a gray scale image and the pixel intensities are normalised to the range 0 to 1; (ii) the image is passed through a Gaussian filter to eliminate any noise present; (iii) pixels whose intensity is less than 0.77 are set to intensity 0 and the others to intensity 1, in order to retain only the pseudo-dynamic features such as the high pen-pressure regions; (iv) the boundaries of the signature are determined, and the unnecessary parts are deleted; (v) the image is resized to 100*200; (vi) the resized image is then split into smaller cells of size 5*5. The idea is elaborated in Fig. 3, which shows a window of size 5*5 slid across the entire image area; in other words, the entire image is split into smaller cells of size 5*5, from each of which the features are extracted.
Fig.3. Extraction of 5*5 cells from the image.
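Steps (iii) and (vi) of the pre-processing stage, binarization at the 0.77 threshold and splitting into 5*5 cells, can be sketched as follows on a toy image; a full implementation would also cover the filtering, cropping and resizing steps:

```python
# Sketch of binarization (step iii) and cell splitting (step vi) on a toy
# grey-scale image stored as a list of rows of intensities in [0, 1].
def binarize(img, thresh=0.77):
    # intensities below the threshold become 0 (ink), the rest become 1
    return [[0 if p < thresh else 1 for p in row] for row in img]

def split_cells(img, size=5):
    h, w = len(img), len(img[0])
    # one size x size cell per (y, x) block, scanned left to right, top to bottom
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h, size) for x in range(0, w, size)]

# toy image: 5 dark rows (ink) above 5 light rows (background)
img = [[0.1] * 10 for _ in range(5)] + [[0.9] * 10 for _ in range(5)]
cells = split_cells(binarize(img))
print(len(cells))  # a 10x10 toy image yields four 5x5 cells
```

On the 100*200 image of the paper the same sweep produces the 800 cells used in the feature extraction stage.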
3. Feature Extraction: The pre-processed signature image contains 800 smaller cells of size 5*5. Two features are extracted from each cell: the pixel density and the centre of gravity distance. We therefore obtain 800 pixel density features and 800 centre of gravity distance features, giving a total of 1600 feature points. Extraction of the pixel density feature: the pixel density is obtained by counting the total number of black pixels present in an image cell. Fig. 4 shows a cell that has been split from the signature image.
Fig.4. A 5x5 cell.
In this cell all the black pixels i.e., the pixels with intensity 0 are considered and the total number of
black pixels is counted by using a counter in the program. The black pixels to be counted are shown darkened in
Fig. 5, and the counting is done according to (1):
Fig.5. Cell showing the Black Pixels to be counted for Pixel Density.
Pd = Σ(i=1..5) Σ(j=1..5) [I(i, j) = 0] ----- (1)

i.e., for each pixel I(i, j) in the 5*5 cell, if I(i, j) = 0 then Pd = Pd + 1.
Extraction of the centre of gravity distance feature: the CoG distance feature is extracted by dividing the total number of black pixels in a cell by two and taking the ceiling of that number. This gives the centre of gravity of the cell, as given by (2). The pixel at this count is considered and its position is extracted. The coordinates of this pixel are used to find its distance from the left bottom corner of the cell. Fig. 6 illustrates the process, and the same is represented by the formula in (3):
Fig.6. Process of finding the CoG distance.
CoG = ceil(Pd / 2) ----- (2)

CoGdis = √((k − x)² + (L − y)²) ---------------- (3)

where (x, y) is the position of the centre black pixel and (k, L) is the left bottom corner of the cell.
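A sketch of both feature extractors for a single 5*5 cell follows. Since the exact corner convention behind k and L in (3) is not fully specified, this sketch assumes the cell's left bottom corner sits at the origin, with black pixels scanned row by row:

```python
# Illustrative extraction of the two cell features: pixel density (1) and
# centre of gravity distance (2)-(3). Black pixels have intensity 0.
import math

def cell_features(cell):
    size = len(cell)
    # collect black pixels as (x, y), with y measured up from the bottom row
    black = [(x, size - 1 - y)
             for y, row in enumerate(cell)
             for x, p in enumerate(row) if p == 0]
    pd = len(black)                      # (1) pixel density
    if pd == 0:
        return 0, 0.0
    cog = math.ceil(pd / 2)              # (2) index of the centre black pixel
    x, y = black[cog - 1]
    return pd, math.hypot(x, y)          # (3) distance from the bottom-left corner

cell = [[1, 1, 1, 1, 1],
        [1, 0, 0, 1, 1],
        [1, 0, 0, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 1, 1]]
pd, dist = cell_features(cell)
print(pd)  # 4 black pixels in this cell
```

Applied to all 800 cells, this yields the two 800-element feature vectors fed into PCA in the next stage.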
4. Data Processing: The pixel density and centre of gravity distance features are processed by Principal
Component Analysis. The PCA concept is explained as follows.
4.1 Principal Component Analysis (PCA): It involves a mathematical procedure that transforms a number of
possibly correlated variables into a smaller number of uncorrelated variables called principal components. The
first principal component accounts for as much of the variability in the data as possible, and each succeeding
component accounts for as much of the remaining variability as possible.
PCA is used as a tool in exploratory data analysis and for building predictive models. It involves the eigenvalue decomposition of a data covariance matrix, or the singular value decomposition of a data matrix, usually after mean centring of the data for each attribute. The results of a PCA are usually discussed in terms of component scores and loadings. PCA is the simplest of the true eigenvector-based multivariate analyses. Its operation can often be thought of as revealing the internal structure of the data in the way that best explains the variance in the data.
4.1.1 Computing PCA using the covariance method
A detailed description of PCA using the covariance method follows. The main aim of PCA is to convert a given data set X of dimension M to an alternative data set Y of smaller dimension L. That is, we are to find the matrix Y, where Y is the Karhunen–Loève transform (KLT) of matrix X, as shown in (4):

Y = KLT{X} ---- (4)
(a) Organizing the data set
Consider a data set of observations of M variables, which needs to be reduced so that each observation can be described with only L variables, L < M. The data is arranged as a set of N data vectors X1, X2, ..., XN, with each Xn representing a single grouped observation of the M variables. X1, X2, ..., XN are taken as column vectors, each of which has M rows. The column vectors are placed into a single matrix X of dimensions M × N.
(b) Calculation of the empirical mean
The empirical mean along each dimension m = 1, ..., M is found. The calculated mean values are placed into an empirical mean vector u of dimensions M × 1, given by (5):

u[m] = (1/N) Σ(n=1..N) X[m, n] ------------------ (5)
(c) Calculation of the deviations from the mean
Mean subtraction is an integral part of the solution for finding a principal component, as it minimizes the mean square error of approximating the data. When mean subtraction is not performed, the first principal component will correspond to the mean of the data. It is therefore necessary to perform mean subtraction (or "mean centering") to ensure that the first principal component describes the direction of maximum variance. The centering of the data is performed by subtracting the empirical mean vector u from each column of the data matrix X. The mean-subtracted data is stored in the M × N matrix B, as given by (6):

B = X − u·h ----- (6)

where h denotes a 1 × N row vector of all 1's, given in the form of (7):

h[n] = 1 for n = 1, ..., N ---- (7)
(d) Finding the covariance matrix
The M × M empirical covariance matrix C is found using the formula in (8):

C = E[B ⊗ B] = E[B·B*] = (1/N) B·B* ---- (8)

where E denotes the expected value operator, ⊗ denotes the outer product operator, and * denotes the conjugate transpose operator.
(e) Find the eigenvectors and Eigen values of the covariance matrix
The matrix V of eigenvectors which diagonalizes the covariance matrix C is calculated using (9):

V⁻¹CV = D ---- (9)

D is the diagonal matrix containing the eigenvalues of C. D takes the form of an M × M diagonal matrix, where:

D[p, q] = λm for p = q = m ---- (10)

λm in (10) is the m-th eigenvalue of the covariance matrix C, and:

D[p, q] = 0 for p ≠ q ---- (11)

Matrix V is also of dimensions M × M, containing M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors so obtained are ordered and paired, so that the m-th eigenvalue corresponds to the m-th eigenvector.
(f) Rearranging the Eigenvectors and Eigen values
The columns of the eigenvector matrix V and the eigenvalue matrix D are sorted in order of decreasing eigenvalue, to make sure that the first principal component has the maximum variance.
(g) Computation of the cumulative energy content for each Eigenvector
The eigenvalues denote the distribution of the energy of the source data among the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the m-th eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m, as shown in (12):

g[m] = Σ(q=1..m) D[q, q] for m = 1, ..., M ----- (12)
(h) Selection of a subset of the eigenvectors as basis vectors
The first L columns of V are saved as the M × L matrix W, as illustrated in (13):

W[p, q] = V[p, q] for p = 1, ..., M and q = 1, ..., L ---- (13)

where 1 ≤ L ≤ M.
The vector g is used as a guide in choosing an appropriate value for L. The aim is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, L may be chosen so that the cumulative energy g is above a certain threshold, such as 90 percent. In this case, the smallest value of L is chosen such that (14) is satisfied:

g[m = L] ≥ 90% ------------ (14)
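Steps (g) and (h) reduce to a simple scan over the sorted eigenvalues; the eigenvalue list below is a made-up example:

```python
# Pick the smallest L whose cumulative energy reaches the target fraction
# (90% here) of the total eigenvalue energy, per (12) and (14).
def choose_L(eigenvalues, target=0.90):
    total = sum(eigenvalues)        # eigenvalues assumed sorted, decreasing
    cum = 0.0
    for L, lam in enumerate(eigenvalues, start=1):
        cum += lam                  # running cumulative energy g[L]
        if cum / total >= target:
            return L
    return len(eigenvalues)

print(choose_L([5.0, 3.0, 1.0, 0.5, 0.5]))  # 3: two components carry 80%,
                                            # three reach exactly 90%
```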
(i) Convert the source data to z-scores
An M × 1 empirical standard deviation vector S is created from the square root of each element along the main diagonal of the covariance matrix C, as given in (15):

S = {s[m]} = √(C[p, q]) for p = q = m = 1, ..., M --------------- (15)

The M × N z-score matrix is calculated using (16); the division must be performed element by element. While this step is useful for various applications, as it normalizes the data set with respect to its variance, it is not an integral part of PCA/KLT:

Z = B / (S·h) -------------------------- (16)
(j) Project the z-scores of the data onto the new basis
The projected vectors are the columns of the matrix given by (17):

Y = W*·Z = KLT{X} ----------- (17)

where W* is the conjugate transpose of the eigenvector matrix, and the columns of matrix Y represent the Karhunen–Loève transform (KLT) of the data vectors in the columns of matrix X.
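Steps (b) through (e) can be demonstrated end to end on 2-dimensional data, where the eigendecomposition of the 2×2 covariance matrix has a closed form. This is an illustrative sketch of the covariance method, not the OSPCV feature pipeline:

```python
# Covariance-method PCA on 2-D points: mean-centre, build C = (1/N) B.B*,
# then solve the 2x2 eigenvalue problem in closed form.
import math

def pca_2d(points):
    n = len(points)
    # (b)-(c) empirical mean and mean-centred data B
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    B = [(x - mx, y - my) for x, y in points]
    # (d) empirical covariance matrix C = [[cxx, cxy], [cxy, cyy]]
    cxx = sum(x * x for x, _ in B) / n
    cyy = sum(y * y for _, y in B) / n
    cxy = sum(x * y for x, y in B) / n
    # (e) eigenvalues from trace and determinant of the 2x2 matrix
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc   # λ1 ≥ λ2

# points spread mainly along the x-axis: the first component should dominate
lam1, lam2 = pca_2d([(0, 0), (2, 0.1), (4, -0.1), (6, 0.0)])
print(lam1 > 10 * lam2)  # True: λ1 carries almost all the variance
```

In the OSPCV system the same idea is applied to the 800-dimensional feature vectors, and it is the variance carried by the first principal component that drives the decision.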
(k) Derivation of PCA using the covariance method
Consider X to be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We need to find a d × d orthonormal transformation matrix P such that (18) is satisfied, with the constraints that cov(Y) is a diagonal matrix and P⁻¹ = P^T:

Y = P^T X ------------ (18)
By substitution and matrix algebra, we obtain (19), which is further simplified to obtain (20):

cov(Y) = E[Y Y^T] --------------- (19)
       = E[(P^T X)(P^T X)^T]
       = E[(P^T X)(X^T P)]
       = P^T E[X X^T] P
       = P^T cov(X) P

We now have:

P cov(Y) = P P^T cov(X) P ------------- (20)
         = cov(X) P
P is rewritten as d column vectors of size d × 1, so:

P = [P1, P2, ..., Pd] ------------ (21)

and cov(Y) as the diagonal matrix:

cov(Y) = diag(λ1, ..., λd) ------------ (22)

Substituting (21) and (22) into (20), we obtain (23):

[λ1 P1, λ2 P2, ..., λd Pd] = [cov(X) P1, cov(X) P2, ..., cov(X) Pd] ------------------ (23)

Note that λi Pi = cov(X) Pi, i.e., each Pi is an eigenvector of the covariance matrix of X. Thus, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.
The data analysis by PCA consists of two stages. They are: (i) Signature training and (ii) Test signature
analysis by PCA, which is explained as follows.
(i) Signature training: In this process eight original signatures are taken and divided into two groups of four each, named M and N. The M group signatures are considered the reference signatures. The features extracted from the signatures are analysed separately. The pixel density features of the M group signatures are arranged in a matrix A, with each signature representing a column. The N group signatures are taken one at a time, and their features are inserted as the last column of matrix A. The principal components of the matrix A are found, along with the variances of the principal components and the cumulative sum of the variances. The first value in the cumulative sum array is stored in another matrix B. The same process is performed for the other 3 signatures of group N and the array B is filled in. The average of the array B is found, and the threshold value is added to the truncated value of the average. This forms the ideal comparison value (I) for the OSPCV system. The process is repeated for the centre of gravity feature.
(ii) Test Signature Analysis: The pixel density features of the M group signatures are arranged in a matrix T, with each signature representing a column. The test signature's features are inserted as the last column of matrix T. The principal components of the matrix T are found, along with the variances of the principal components and the cumulative sum of the variances. The first value in the cumulative sum array (K) is the value to be compared with the ideal comparison value (I). The process is repeated for the centre of gravity feature.
V. Comparison and Decision
The variances are represented by the energy of the principal components, accounted for by their eigenvalues. Thus, if a principal component has more energy, it belongs to the same group. Hence the values K and I are compared as follows: if K < I, meaning the test signature has less energy than the reference signatures, the signature is declared forged; if K >= I, meaning the energy of the test signature is at least that of the reference signatures, the signature is declared genuine.
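The decision rule reduces to a single comparison; the values of K and I below are hypothetical:

```python
# OSPCV-style decision: K is the first value of the cumulative variance sum
# for the test signature, I the ideal comparison value learned in training.
def verify(K, I):
    return "genuine" if K >= I else "forged"

print(verify(K=0.91, I=0.85), verify(K=0.70, I=0.85))
```

In the full system this comparison is made for both the pixel density and centre of gravity features before the final decision.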
VI. OSPCV Algorithm
Problem definition:
Given a signature whose authenticity is to be verified, the goal is to:
(i) Pre-process the obtained signatures.
(ii) Extract the centre of gravity distance and the pixel density features.
(iii) Test the authenticity of the test signature by using Principal Component Variances.
Table 1 shows the algorithm for the proposed system, known as OSPCV, which verifies the authenticity of a given test signature. The test signature and the genuine signature database are digitised using a scanner. The signature is pre-processed as per the steps shown in the algorithm. The image is split into smaller cells and features are extracted from each cell. The features are fed into PCA to obtain the similarities of the signatures, and the decision is made based on the similarities between the genuine and test signatures.
Table 1: OSPCV Algorithm
Input: Database of genuine and forgery signatures.
Output: Decision stating matching or not matching.
(i) Acquire the signature images from the database chosen as well as the test signature.
(ii) The RGB images are converted to gray scale images. Only the exact signature area is considered for
further processing. Noise removal and thinning are performed.
(iii) The image obtained from the previous stage is resized to 100×200 and split into smaller cells of size 5×5.
(iv) The pixel density and centre of gravity distance features are extracted from each of the cells.
(v) The extracted features are processed using the PCA.
(vi) The value obtained from PCA for the test signature and the database are compared.
(vii) The decision is made on the basis of relation between the first value in the cumulative sum array (K)
and the ideal comparison value (I).
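Step (iv) of the algorithm, feature extraction per cell, might look as follows. The reference point for the centre-of-gravity distance is an assumption (this excerpt does not fix it); here it is taken as the geometric centre of each cell, and the traversal order is illustrative.

```python
import numpy as np

def cell_features(binary_img, cell=5):
    """Pixel density and centre-of-gravity distance for each cell.

    binary_img: 100x200 array with 1 for signature pixels (sizes as in
    step (iii)). The CoG reference point, taken here as the geometric
    centre of the cell, is an assumption.
    """
    h, w = binary_img.shape
    density, cog_dist = [], []
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            block = binary_img[r:r + cell, c:c + cell]
            density.append(block.mean())          # fraction of ink pixels
            ys, xs = np.nonzero(block)
            if ys.size == 0:
                cog_dist.append(0.0)              # empty cell
            else:
                centre = (cell - 1) / 2.0
                cog_dist.append(np.hypot(ys.mean() - centre,
                                         xs.mean() - centre))
    return np.array(density), np.array(cog_dist)
```

For a 100×200 image and 5×5 cells this yields two 800-element feature vectors per signature, which become one column each in the PCA matrices of the previous sections.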
VII. Experimental Results and Performance Analysis
The experiment is carried out on the GPDS960 database [33], which consists of signatures from 100
individuals, each having 24 genuine signatures and 30 forged signatures, amounting to a total of 5400
signatures (2400 genuine and 3000 forged). All the images are resized to 100×200. The programming software
used for execution of the proposed algorithm is MATLAB.
DOI: 10.9790/0661-17150823 www.iosrjournals.org 18 | Page
12. OSPCV: Off-line Signature Verification using Principal Component Variances
Table 2: FAR and FRR vs. threshold for the Pixel Density (PD), Centre of Gravity (CoG) distance, PD OR CoG
distance and PD AND CoG distance features, GPDS database [100×200]

            PD              CoG distance    PD OR CoG       PD AND CoG
Threshold   FAR     FRR     FAR     FRR     FAR     FRR     FAR     FRR
-5          84.18   1.6     80.76   0.42    88.8    1.6     76.15   0.42
-4          71.53   4.16    66.83   2.02    77.6    4.7     60.76   1.49
-3          56.49   8.65    51.36   4.8     63.07   9.4     44.78   4.05
-2          38.97   15.38   33.41   11.85   44.7    17.2    27.69   10.04
-1          23.58   25.42   18.03   21.58   27.52   27.99   15.1    19.01
0           12.05   39.42   9.14    37.82   15.55   43.8    5.64    33.44
1           4.27    53.09   3.33    54.48   5.55    60.47   2.05    47.11
2           1.45    64.95   0.76    68.91   1.88    71.79   0.34    62.07
3           0.34    75.32   0.17    77.77   0.42    80.02   0.08    73.07
4           0.08    82.26   0       82.37   0.08    84.72   0       79.91
5           0       86.32   0       84.5    0       87.28   0       83.54
Table 2 shows the variation of FAR and FRR with the threshold for the Pixel Density (PD), centre of
gravity distance, PD OR CoG distance and PD AND CoG distance features. The threshold is varied from -5 to
+5 and the corresponding FAR and FRR are tabulated. FAR and FRR vary inversely: an increase in FAR
corresponds to a decrease in FRR and vice versa. An optimum threshold is chosen such that both FAR and FRR
lie within permissible limits. Experimental results show that optimum FAR and FRR are obtained when the
threshold lies in the range -2 to 0.
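A minimal sketch of how FAR/FRR curves like those in Table 2 can be obtained from per-signature scores. As an assumption for illustration, each score stands for the margin K - I at threshold 0, so sweeping the decision threshold mimics sweeping the additive term in I:

```python
import numpy as np

def far_frr_curve(genuine_scores, forgery_scores, thresholds):
    """FAR/FRR (in %) per threshold for the rule 'accept if score >= t'."""
    # A forgery is falsely accepted when its score clears the threshold.
    far = np.array([np.mean(forgery_scores >= t) * 100 for t in thresholds])
    # A genuine signature is falsely rejected when its score falls below it.
    frr = np.array([np.mean(genuine_scores < t) * 100 for t in thresholds])
    return far, frr

def equal_error_rate(far, frr, thresholds):
    """Threshold and rate at which FAR and FRR are closest (the EER)."""
    i = np.argmin(np.abs(far - frr))
    return thresholds[i], (far[i] + frr[i]) / 2
```

Raising the threshold trades FAR for FRR, which reproduces the inverse relationship visible in Table 2.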
The four graphs of FAR and FRR against the threshold for the GPDS database, obtained for the four
different features, are shown in Fig. 7. As the threshold increases, FRR increases and FAR decreases. The EER
obtained for the pixel density and centre of gravity distance features is 24.07 and 20.2 respectively, at optimal
thresholds of -1.0722 and -1.141, the points where FAR equals FRR. The PD and CoG distance features are
fused by the logical operations OR and AND; the corresponding graphs are also shown in Fig. 7. The EER is
27.81 and 17.06 for the OR and AND operations respectively, at optimal thresholds of -1.01 and -1.217.
Fig.7. FAR/FRR against Threshold plot for Pixel Density, Centre of Gravity distance, PD OR CoG distance
and PD AND CoG distance features for GPDS database [100×200]
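The OR/AND fusion of the two feature-level verdicts is a simple decision-level combiner; a sketch (names are ours):

```python
def fuse_decisions(pd_genuine, cog_genuine, mode="AND"):
    """Decision-level fusion of the PD and CoG verdicts.

    "AND" accepts only when both features accept (the stricter rule,
    consistent with its lower reported EER of 17.06); "OR" accepts
    when either feature accepts.
    """
    if mode == "AND":
        return pd_genuine and cog_genuine
    return pd_genuine or cog_genuine
```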
Fig.8. Receiver Operating Characteristics of OSPCV system.
The Receiver Operating Characteristics for the different features of the OSPCV system are shown in Fig. 8. The
ROC consists of plots of FRR versus FAR for the pixel density, centre of gravity distance, PD OR CoG
distance, and PD AND CoG distance features. The graphs show that system performance with the PD AND
CoG distance feature is better than in the other cases.
Table 3 compares the performance of the OSPCV model with other contributions that used the same
GPDS database; it shows a notable improvement over those systems.
Table 3: Comparison of EER values of the proposed model with existing models on the GPDS database.

Reference                          Method                                          % EER
Prashanth et al. [34] (SSCOSV)     Pixel density and geometric points,             30.04
                                   Standard Scores Correlation
Ramachandra et al. [35] (SCGMC)    Cross Validation and Graph matching,            24.0
                                   Euclidean distance
Nguyen et al. [36]                 Enhanced Modified Direction Feature, Neural     20.07
                                   Networks, Support Vector Machine
Proposed OSPCV method              Pixel density AND Centre of Gravity distance,   17.06
                                   Analysis of Principal Component Variances
The graphical comparison of the percentage EER of the proposed OSPCV with the PD AND CoG distance
feature against publicly reported results on the GPDS database is shown in Fig. 9. The figure shows that the
performance of the proposed system is better than that of the other methods [26, 27, 28] using the GPDS
database.
Fig.9. Comparison of Receiver Operating Characteristics of the proposed OSPCV system with other methods.
VIII. Conclusions
In this paper, Off-line Signature Verification using Principal Component Variances (OSPCV) is presented.
Two problems are encountered in the signature verification process: intra-person variations (variations in the
same person's signature taken at different time instances) and inter-person variations (variations between the
same signature signed by two different people). Both problems have to be counterbalanced by the system. The
proposed system uses the pixel density and centre of gravity distance features to represent the signature image,
and Principal Component Variances to analyse these features and arrive at a verification decision. The
experimental results demonstrate that the proposed method achieves high performance. We therefore conclude
that the proposed system can be used effectively for off-line signature verification with high reliability, as it
gives a lower error rate than existing systems.
Since we have developed our own technique for incorporating PCA, we believe this algorithm can be
effectively extended, with small modifications, to other areas of biometric verification such as face recognition,
retina identification and fingerprint verification. The performance of the system can be further improved by
fusing the current technique with other techniques such as SVMs and Neural Networks.
References
[1]. Mustafa Agil Muhamad Balbed, Sharifah mumtazah Syed Ahmad, and Asma Shakil, “ANOVA-Based feature Analysis and
Selection in HMM based Off-line Signature Verification System,” IEEE Conference on Innovative Technologies in Intelligent
Systems and Industrial Applications, pp. 66-69, 2009.
[2]. H. N Prakash and D S. Guru, “Geometric Centroids and their Relative Distances for Off-line Signature Verification,” IEEE
International Conference on Document Analysis and Recognition, pp. 121-125, 2009.
[3]. A Ismail, M A Ramadan, T El Danf, and A H Samak, “Automatic Signature Recognition and Verification using Principal
Components Analysis,” IEEE International Conference on Computer Graphics, Imaging and Visualisation, pp. 356-361, 2009.
[4]. V. L Blankers, C. E. Van den Heuvel, K. Y Franke and L G. Vuurpijl, “The ICDAR 2009 Signature Verification Competition,”
International Conference on Document Analysis and Recognition, pp. 1403-1407, 2009.
[5]. Jesus F. Vargas, Miguel A, Carlos M Travieso, and Jesus B Alonso, “Off-line Signature Verification System based on Pseudo-
Cepstral Coefficients” International Conference on Document Analysis and Recognition, pp. 126-130, 2009.
[6]. Ioana Barbantan, Camelia Vidrighin, Raluca Borca, “An Off-line System for Handwritten Signature Recognition,” IEEE
International Conference on Intelligent Computer Communication and Processing, pp. 3-10, 2009.
[7]. J Francisco Vargas, Miguel A Ferrer, Carlos M Travieso and Jesus B Alonso, “Off-line Signature Verification based on High
Pressure Polar Distribution,” International Conference on Frontiers in Handwriting Recognition, pp. 373-378, 2008.
[8]. Ismail A Ismail, Mohamed A Ramadan, Talaat S El-Danaf and Ahmed H Samak, “An Efficient Off-line Signature Identification
Method based on Fourier Descriptor and Chain Codes,” International Journal of Computer Science and Network Security, Vol. 10,
No.5, pp.29-35, 2010.
[9]. Siyuan Chen and Sargur Srihari, “A New Off-line Signature Verification method based on Graph Matching,” International
Conference on Pattern Recognition, pp. 869-872, 2009.
[10]. Emre Ozgunduz, Tülin Şentürk and M Elif Karslıgil, “Off-line Signature Verification and Recognition by Support Vector Machine,”
European Signal Processing Conference, pp. 1-6, 2005.
[11]. Banshider Majhi, Y Santhosh Reddy, and D Prasanna Babu, “Novel Features for Off-line Signature Verification,” International
Journal of Computers, Communications and Control, vol. 1, no. 1, pp. 17-24, 2006.
[12]. A Piyush Shanker and A N Rajagopalan, “Off-line Signature Verification using DTW,” Pattern Recognition Letters, vol. 28, no. 12,
pp 1407-1414, 2007.
[13]. A El-Yacoubi, E J R Justino, R Sabourin, and E Bortolozzi, “Off-line signature verification using HMMs and Cross validations,”
IEEE Signal Processing Society Workshop, pp 859-868, 2000.
[14]. Simone Marinai, Marco Gori and Giovanni Soda, “Artificial Neural Networks for Document Analysis and Recognition,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, pp.23-35, 2005.
[15]. Robert Sabourin, Jean Pierre Drouhard, “Off-line Signature Verification System using Directional PDF and Neural Networks,”
International Conference on Pattern Recognition Methodology and Systems, pp 321-325, 1992.
[16]. Muhammad Reza Pourshahabi, Mohamad Hoseyn Sigari, and Hamid Reza Pourreza, “Off-line Hand Written Signature
Identification and Verification using Contourlet Transform,” IEEE International Conference of Soft Computing and Pattern
Recognition, pp. 670-673, 2009.
[17]. Stephane Armand, Michael Blumenstein, and Vallipuram Muthukkumarasmy, “Off-line Signature Verification based on the
Modified Direction Feature,” International Conference on Pattern Recognition, pp. 509-512, 2006.
[18]. Xiufen Ye, Weiping Hou, and Weixing Feng, “Off-line Handwritten Signature Verification with Inflections Feature,” International
Conference on Mechatronics and Automation, pp 787-792, 2005.
[19]. M Taylan Das and L Canan Dulger, “Off-line Signature Verification with PSO-NN Algorithm,” International Symposium on
Computer and Information Science, pp.1-6, 2007.
[20]. Luiz S Oliveira, Edson Justino, and Robert Sabourin, “Off-line Signature Verification using Writer-Independent Approach,”
International Joint Conference on Neural Networks, pp. 2539-2544, 2007.
[21]. Juan Hu and Youbin Chen, “Offline Signature Verification Using Real Adaboost Classifier Combination of Pseudo-dynamic
Features,” 12th International Conference on Document Analysis and Recognition, pp.1345-1349, 2013.
[22]. Burcer Erkman, Nihan Kahraman, Revna Acar Viral, and Tulay Yildirin, “Conic Section Function Neural Network Circuitry For
Offline Signature Recognition,” IEEE Transactions on Neural Networks, vol. 21, no. 4, pp. 667-672 April 2010.
[23]. Y Guerbai, Y Chibani and B Hadjadji, “Writer-Independent Handwritten Signature Verification based on One-Class SVM
Classifier,” International Joint Conference on Neural Networks, pp. 327-331, 2014.
[24]. Gady Agam and Suneel Suresh, “Warping-Based Offline Signature Recognition,” IEEE Transactions on Information Forensics and
Security, vol. 2, no. 3, pp. 430-437, September 2007.
[25]. Saulo Henrique Leoncio de Medioros Naploes, Cleber Zanchettin, “Offline Handwritten Signature Verification through Network
Radial Basis Functions Optimized by Differential Evolution,” IEEE World Congress on Computational Intelligence, pp. 10-15, June
2012.
[26]. Vargas J. F, Ferrer, M. A. Travieso, C. M. Alonso J. B., “Offline Signature Verification based on Gray Level Information using
Texture Features,” Pattern Recognition , ISSN: 0031-320,vol. 44, no. 2, pp. 1037-1045, 2010.
[27]. Othman O. Khalifa, Md. Khorshed Alam, Aisha Hassan Abdalla, “An Evaluation on Offline Signature Verification using Artificial
Neural Network Approach,” International Conference On Computing, Electrical And Electronic Engineering (ICCEEE), pp. 368-
371, 2013.
[28]. B. H. Shekar and R. K. Bharathi, “Eigen-signature: A Robust and an Efficient Offline Signature Verification Algorithm,” IEEE-
International Conference on Recent Trends in Information Technology, pp. 134-138, June 2011.
[29]. Mustafa Berkay Yilmaz, Berrin Yanikoglu, Caglar Tirkaz and Alisher Kholmatov, “Offline Signature Verification using Classifier
Combination of HOG and LBP Features,” IEEE International conference of Biometrics, pp. 1-7, 2011.
[30]. Miguel A. Ferrer, J. Francisco Vargas, Aythami Morales, and Aarón Ordóñez, “Robustness of Offline Signature Verification Based
on Gray Level Features,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 3, pp. 966-977, June 2012.
[31]. M. Manoj Kumar and N. B. Puhan, “Offline Signature Verification using the Trace Transform,” IEEE International Advance
Computing Conference, pp. 1066-1070, 2014.
[32]. Mujahed Jarad, Nijad Al-Najdawi and Sara Tedmori “Offline Handwritten Signature Verification System using a Supervised Neural
Network Approach,” International Conference on Computer Science and Information Technology, pp. 189-195, 2014.
[33]. M. Blumenstein, Miguel A. Ferrer and J. F. Vargas, “Off-line Signature Verification Competition: Scenario2,” International
Conference on Frontiers in Handwriting Recognition, pp. 721-726, 2010.
[34]. Prashanth C. R, K. B. Raja, K. R. Venugopal, and L. M. Patnaik, “Standard Scores Correlation based Off-line Signature Verification
System,” IEEE Conference on Advances in Computing, Control and Telecommunication Technologies, pp. 49-53, 2009.
[35]. Ramachandra A. C, Ravi J, K. B. Raja, Venugopal K. R. and L M. Patnaik, “Signature Verification using Graph Matching and
Cross-Validation Principle,” ACEEE International Journal of Recent Trends in Engineering, vol. 1, no. 1, pp. 57-61, 2009.
[36]. Vu Nguyen, Michael Blumenstein, Vallipuram Muthukkumarasamy and Graham Leedham, “Off-line Signature Verification using
Enhanced Modified Direction Features in Conjunction with Neural Classifiers and Support Vector Machines," IEEE International
Conference on Document Analysis and Recognition, vol. 2, pp. 734-738, 2007.
Author Biographies
Arunalatha J S is an Associate Professor in the Department of Computer Science and Engineering at
University Visvesvaraya College of Engineering, Bangalore University, Bangalore, India.
She obtained her Bachelor of Engineering in Computer Science and Engineering from
P E S College of Engineering, Mandya, under Mysore University, and received her
Master's degree in Computer Science and Engineering from University Visvesvaraya
College of Engineering, Bangalore University, Bangalore. She is presently pursuing her Ph.D in
the area of Biometrics. Her research interest is in the area of Biometrics and Image
Processing.
Prashanth C R received the BE degree in Electronics and the ME degree in Digital Communication and the
Ph.D. degree in Computer Science and Engineering from Bangalore University, Bangalore.
He is currently Professor, Department of Telecommunication Engineering, Dr. Ambedkar
Institute of Technology, Bangalore. His research interests include Computer Vision, Pattern
Recognition, Biometrics, and Communication Engineering. He has over 15 research
publications in refereed International Journals and Conference Proceedings. He has served
as a member of Board of Examiners for Bangalore University and Visvesvaraya
Technological University. He is a member of IACSIT, and life member of Indian Society
for Technical Education, New Delhi.
Tejaswi V is a student of Computer Science and Engineering from National Institute of Technology, Surathkal.
She completed her B.Tech in Computer Science and Engineering from Rastriya Vidyalaya
College of Engineering, Bangalore. Her research interest is in the area of Wireless Sensor
Networks, Biometrics, Image Processing and Big data.
Shaila K is the Professor and Head in the Department of Electronics and Communication Engineering at
Vivekananda Institute of Technology, Bangalore, India. She obtained her B.E in Electronics
and M.E degrees in Electronics and Communication Engineering from Bangalore University,
Bangalore. She obtained her Ph.D degree in the area of Wireless Sensor Networks in
Bangalore University. She has authored and co-authored two books. Her research interest is
in the area of Sensor Networks, Adhoc Networks and Image Processing.
K B Raja is an Assistant Professor, Department of Electronics and Communication Engineering, University
Visvesvaraya College of Engineering, Bangalore University, Bangalore. He obtained his BE
and ME in Electronics and Communication Engineering from University Visvesvaraya
College of Engineering, Bangalore. He was awarded Ph.D. in Computer Science and
Engineering from Bangalore University. He has over 130 research publications in refereed
International Journals and Conference Proceedings. His research interests include Image
Processing, Biometrics, VLSI Signal Processing, computer networks.
Dinesh Anvekar is currently working as Head and Professor of Computer Science and Engineering at Nitte
Meenakshi Institute of Technology (NMIT), Bangalore. He obtained his Bachelor degree
from University Visvesvaraya College of Engineering. He received his Master's and Ph.D.
degrees from the Indian Institute of Technology, and the Best Ph.D. Thesis Award of the
Indian Institute of Science. He has 15 US patents issued for work done at IBM Solutions Research
Center, Bell Labs, Lotus Interworks, and for Nokia Research Center, Finland. He has filed
75 patents. He has authored one book and over 55 technical papers. He has received
Invention Report Awards from Nokia Research Center, Finland, Lucent Technologies (Bell
Labs) Award for paper contribution to Bell Labs Technical Journal and KAAS Young
Scientist Award in Karnataka State, India.
Venugopal K R is currently the Principal, University Visvesvaraya College of Engineering, Bangalore
University, Bangalore. He obtained his Bachelor of Engineering from University
Visvesvaraya College of Engineering. He received his Masters degree in Computer Science
and Automation from Indian Institute of Science Bangalore. He was awarded Ph.D. in
Economics from Bangalore University and Ph.D. in Computer Science from Indian
Institute of Technology, Madras. He has a distinguished academic career and has degrees
in Electronics, Economics, Law, Business Finance, Public Relations, Communications,
Industrial Relations, Computer Science and Journalism. He has authored and edited 51
books on Computer Science and Economics, which include Petrodollar and the World
Economy, C Aptitude, Mastering C, Microprocessor Programming, Mastering C++ and Digital Circuits and
Systems. He has filed 75 patents. During his three decades of service at UVCE, he has over 400 research
papers to his credit. His research interests include Computer Networks, Wireless Sensor Networks, Parallel and
Distributed Systems, Digital Signal Processing and Data Mining.
S S Iyengar is currently Ryder Professor, Florida International University, USA. He was Roy Paul Daniels
Professor and Chairman of the Computer Science Department of Louisiana state University.
He heads the Wireless Sensor Networks Laboratory and the Robotics Research Laboratory
in the USA. He has been involved with research in High Performance Algorithms, Data
Structures, Sensor Fusion and Intelligent Systems, since receiving his Ph.D degree in 1974
from MSU, USA. He is Fellow of IEEE and ACM. He has directed over 40 Ph.D students
and 100 post graduate students, many of whom are faculty of Major Universities worldwide
or Scientists or Engineers at National Labs / Industries around the world. He
has published more than 500 research papers and has authored/co-authored 6 books and edited 7 books. His
books are published by John Wiley and Sons, CRC Press, Prentice Hall, Springer Verlag, IEEE Computer
Society Press, etc.; one of his books, Introduction to Parallel Algorithms, has been translated into Chinese.
L M Patnaik is currently Honorary Professor, Indian Institute of Science, Bangalore, India. He was a Vice
Chancellor, Defense Institute of Advanced Technology, Pune, India and was a Professor
since 1986 with the Department of Computer Science and Automation, Indian Institute of
Science, Bangalore. During the past 35 years of his service at the Institute he has over 700
research publications in refereed International Journals and refereed International
Conference Proceedings. He is a Fellow of all the four leading Science and Engineering
Academies in India; Fellow of the IEEE and the Academy of Science for the Developing
World. He has received twenty national and international awards; notable among them is
the IEEE Technical Achievement Award for his significant contributions to High
Performance Computing and Soft Computing. His areas of research interest have been Parallel and Distributed
Computing, Mobile Computing, CAD, Soft Computing and Computational Neuroscience.