This document summarizes a research paper that proposes a system for offline signature verification through score-level fusion of multiple classifiers. The system uses different local features (HOG, LBP, SIFT) extracted at various scales and fuses them within classifiers. It also incorporates both user-dependent classifiers trained for each individual and a global classifier trained on all users. Evaluated on the public GPDS-160 database, the full fusion system achieves a state-of-the-art equal error rate of 6.97% for skilled forgery detection, without requiring skilled forgeries during enrollment.
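A minimal sketch of the score-level fusion step described above: per-classifier match scores for each signature are combined by weighted averaging and compared against a decision threshold. The equal weights and the 0.5 threshold here are illustrative assumptions, not the paper's trained values.

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Weighted-average (score-level) fusion of per-classifier match scores.
    score_lists: list of 1-D arrays, one per classifier, aligned by sample."""
    scores = np.vstack(score_lists)            # shape (n_classifiers, n_samples)
    if weights is None:
        weights = np.full(len(score_lists), 1.0 / len(score_lists))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise so fused scores stay in range
    return weights @ scores                    # shape (n_samples,)

def verify(fused_scores, threshold=0.5):
    """Accept a signature as genuine when its fused score reaches the threshold."""
    return fused_scores >= threshold
```

In a real system the weights would be learned on a validation set and the threshold chosen to hit the desired operating point (e.g. the equal error rate).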
DDSGA: A Data-Driven Semi-Global Alignment Approach for Detecting Masquerade ... (1crore projects)
An Efficient Approach for Requirement Traceability Integrated With Software R...IOSR Journals
Abstract: Traceability links between requirements of a system and its source code are helpful in reducing
system conception effort. During software updates and maintenance, the traceability links become invalid since
the developers may modify or remove some features of the source code. Hence, to acquire trustable links from a
system source code, a supervised link tracing approach is proposed here. In proposed approach, IR techniques
are applied on source code and requirements document to generate baseline traceability links. Concurrently,
software repositories are also mined to generate validating traceability links i.e. Histrace links which are then
called as experts. Now a trust model named as DynWing is used to rank the different types of experts. DynWing
dynamically assigns weights to different types of experts in ranking process. The top ranked experts are then fed
to the trust model named as Trumo. Trumo validates the baseline links with top ranked experts and finds the
trustable links from baseline links set. While validating the links, Trumo is capable of discarding or re-ranking
the experts and finds most traceable links. The proposed approach is able to improve the precision and recall
values of the traceability links.
Index Terms: Traceability, requirements, features, source code, repositories, experts.
Offline Handwritten Signature Identification and Verification using Multi-Res... (CSCJournals)
This paper proposes a new method for offline (static) handwritten signature identification and verification based on the Gabor wavelet transform. The idea is to offer a simple, robust feature-extraction method based on Gabor wavelets whose dependence on the signer's nationality is reduced to a minimum. After a pre-processing stage consisting of noise reduction and normalisation of the signature image by size and rotation, a virtual grid is placed over the signature image. Gabor wavelet coefficients with different frequencies and orientations are computed at each point of this grid and fed into a classifier; the shortest weighted distance is used as the classifier, where the weight used in computing the distance is based on the distribution of instances in each signature class. One advantage of this system is its ability to identify and verify signatures of different nationalities: it has been tested on four signature datasets of different nationalities, including Iranian, Turkish, South African and Spanish signatures. Experimental results and comparison with other systems are consistent with the desired outcomes. Despite using the simplest classification method, the nearest neighbour, the proposed algorithm performs very well in comparison with other algorithms. Compared with the accuracy of human identification and verification, human identification is more accurate, but the proposed system has a lower error rate in verification.
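The grid-based Gabor feature extraction described above can be sketched as follows: build the real part of a 2-D Gabor kernel at a few orientations, then take its inner product with the image patch around each point of a virtual grid. All parameter values (kernel size, wavelength, grid step) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real part of a 2-D Gabor filter (illustrative parameter values)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))     # Gaussian envelope
    return envelope * np.cos(2 * np.pi * xr / lam + psi)     # oriented carrier

def grid_features(img, step=8, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Gabor responses sampled on a virtual grid laid over the signature image."""
    kernels = [gabor_kernel(theta=t) for t in thetas]
    half = kernels[0].shape[0] // 2
    feats = []
    for r in range(half, img.shape[0] - half, step):
        for c in range(half, img.shape[1] - half, step):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            feats.extend(float(np.sum(patch * k)) for k in kernels)
    return np.array(feats)
```

The resulting vector would then be compared with enrolled templates using the weighted-distance classifier the abstract mentions.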
Automatic signature verification with chain code using weighted distance and ... (eSAT Journals)
Abstract: Signature forgery can be restricted by either online or offline signature verification techniques. Online verification matches a pre-processed signature dynamically by detecting the motion of the stylus during signing, while offline verification performs a match using the two-dimensional scanned image of the signature. This paper surveys the various techniques available for offline signature verification along with their shortcomings.
Keywords: Signature Verification, Weighted Distance, High Pressure Factor, Normalization, Threshold Value
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Cataloging Of Sessions in Genuine Traffic by Packet Size Distribution and Ses... (IOSR Journals)
Abstract: Classifying traffic into specific network applications is vital for application-aware network management, and it is becoming harder because modern applications obscure their network behaviors. Since port-number-based classifiers work only for a few well-known applications, and signature-based classifiers do not apply to encrypted packet payloads, researchers increasingly classify network traffic based on behaviors observed in network applications. In this paper, a session-level flow classification (SLFC) approach is proposed to classify network flows at the session level, where a session comprises the flows of the same conversation. SLFC first classifies flows into the corresponding applications by packet size distribution (PSD) and then groups flows into sessions by port locality. With PSD, each flow is transformed into a set of points in a two-dimensional space, and the distances between the flow and the representatives of preselected applications are computed; the flow is assigned to the application at the least distance. Meanwhile, port locality is used to group flows into sessions, since an application often uses consecutive port numbers within a session. If the flows of a session are classified into different applications, an arbitration algorithm is invoked to resolve the disagreement.
Keywords: flow classification; session grouping; session classification; packet size distribution
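The PSD-based step above amounts to a nearest-representative classifier: each flow's packet sizes are summarized as a normalized histogram and compared against per-application representatives. The bin count and the Euclidean distance here are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def psd(packet_sizes, bins=8, max_size=1500):
    """Packet-size distribution of one flow as a normalised histogram."""
    hist, _ = np.histogram(packet_sizes, bins=bins, range=(0, max_size))
    return hist / max(hist.sum(), 1)

def classify_flow(packet_sizes, representatives):
    """Assign the flow to the application whose representative PSD is nearest."""
    p = psd(packet_sizes)
    dists = {app: float(np.linalg.norm(p - rep))
             for app, rep in representatives.items()}
    return min(dists, key=dists.get)
```

Session grouping by port locality and the arbitration step would sit on top of this per-flow decision.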
Offline Signature Verification Using Local Radon Transform and Support Vector... (CSCJournals)
In this paper, we propose a new method for signature verification using the local Radon transform. The proposed method uses the Radon transform locally as a feature extractor and a Support Vector Machine (SVM) as the classifier. The main idea is to apply the Radon transform locally, rather than globally, for line-segment detection and feature extraction. The advantages of the proposed method are robustness to noise, size invariance and shift invariance. Using a dataset of 600 signatures from 20 Persian writers, and another dataset of 924 signatures from 22 English writers, our system achieves good results. The experimental results are compared with two other methods; the comparison shows that our method performs well for signature identification and verification across different cultures.
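A rough sketch of the "local Radon" idea: split the image into blocks and, for each block, accumulate pixel intensity into bins along a few projection angles. This is a crude binned approximation of the Radon transform under assumed block size, angles, and bin count, not the paper's exact implementation (and the SVM stage is omitted).

```python
import numpy as np

def radon_projection(block, theta, n_bins=16):
    """Approximate Radon projection of a block at angle theta:
    bin each pixel by its signed distance along the projection axis."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    t = (x - w / 2) * np.cos(theta) + (y - h / 2) * np.sin(theta)
    scaled = (t - t.min()) / (np.ptp(t) + 1e-9) * n_bins
    bins = np.clip(scaled.astype(int), 0, n_bins - 1)
    proj = np.zeros(n_bins)
    np.add.at(proj, bins.ravel(), block.ravel())   # accumulate intensity per bin
    return proj

def local_radon_features(img, block=16, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Concatenate Radon projections of non-overlapping blocks (local, not global)."""
    feats = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            for t in thetas:
                feats.append(radon_projection(img[r:r+block, c:c+block], t))
    return np.concatenate(feats)
```

The concatenated per-block projections would then be fed to an SVM for genuine/forgery classification.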
Offline handwritten signature identification using adaptive window positionin... (sipij)
To address this challenge, the paper proposes an Adaptive Window Positioning technique that focuses not just on the meaning of the handwritten signature but also on the individuality of the writer. This technique divides the handwritten signature into 13 small windows of size n×n (13×13): large enough to contain ample information about the author's style, yet small enough to ensure good identification performance. The process was tested on a GPDS dataset containing 4870 signature samples from 90 different writers, comparing the robust features of the test signature with those of the user's signature using an appropriate classifier. Experimental results show that adaptive window positioning is an efficient and reliable method for accurate feature extraction in offline handwritten signature identification. The technique can also be used to detect signatures signed under emotional duress.
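One loose way to sketch window selection over a signature image is to keep the fixed-size patches containing the most ink. This ink-density placement rule is my assumption for illustration; the paper's adaptive positioning rule is more involved.

```python
import numpy as np

def adaptive_windows(img, n_windows=13, size=13):
    """Return the n_windows size x size patches with the most ink
    (assumes ink pixels have higher values than background)."""
    h, w = img.shape
    candidates = []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            ink = float(img[r:r+size, c:c+size].sum())
            candidates.append((ink, r, c))
    candidates.sort(reverse=True)            # densest patches first
    return [img[r:r+size, c:c+size] for _, r, c in candidates[:n_windows]]
```

Features extracted from each selected window would then go to the classifier mentioned in the abstract.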
Public Integrity Auditing for Shared Dynamic Data Storage under One-Time Genera... (paperpublications3)
Abstract: Verifying the result of remote computation plays a crucial role in addressing the issue of trust. The outsourced data comes from multiple data sources; to diagnose the originator of errors, each data source is allotted a unique secret key, which requires inner-product verification to be performed under the two parties' different keys. The proposed methods minimize the running time. In the multi-key setting, where each source is given a different secret key, multiple data sources can upload their data streams along with their respective verifiable homomorphic tags. The scheme consists of three novel join techniques, depending on the availability of an authenticated data structure (ADS): (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute; (ii) Authenticated Index Merge Join (AIM), which requires an ADS (on the join attribute) for both relations; and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. The client may choose any portion of the data streams for queries, and the communication between the client and the server is independent of the input size. The inner-product evaluation can be performed by any two sources, and the result can be verified using the corresponding tag.
Keywords: Computation of outsourcing, Data Stream, Multiple Key, Homomorphic encryption.
Title: Public Integrity Auditing for Shared Dynamic Data Storage under One-Time Generated Multiple Keys
Author: C. Nisha Malar, M. S. Bonshia Binu
ISSN 2350-1049
International Journal of Recent Research in Interdisciplinary Sciences (IJRRIS)
Paper Publications
An offline signature verification using pixels intensity levels (Salam Shah)
Offline signature recognition has great importance in our day-to-day activities. Researchers are trying to use signatures as a biometric identifier in areas such as banks, security systems, and other identification settings. Fingerprint, iris, thumb-impression and face-detection biometrics are successfully used to identify individuals because of their static nature. However, people's signatures show variability that makes it difficult to recognize genuine signatures correctly and to use them as biometrics. Handwritten signatures are important in banks for cheque and credit card processing and for legal and financial transactions, and they are a main target of fraud. To deal with complex signatures, places such as banks need a robust signature verification method that can correctly classify signatures as genuine or forged to avoid financial fraud. This paper presents a pixel-intensity-level-based offline signature verification model for the correct classification of signatures. To achieve this, three statistical classifiers are used: Decision Tree (J48), probability-based Naïve Bayes (NB tree), and Euclidean-distance-based k-Nearest Neighbor (IBk).
To compare the accuracy rates of offline signatures with online signatures, the three classifiers were applied to an online signature database, achieving accuracy rates of 99.90% with the decision tree (J48), 99.82% with the Naïve Bayes tree, and 98.11% with k-Nearest Neighbor (with 10-fold cross-validation). On offline signatures, the results were 64.97% with the decision tree (J48), 76.16% with the Naïve Bayes tree, and 91.91% with k-Nearest Neighbor (IBk) (without forgeries). The accuracy rates dropped with the inclusion of forgery signatures, to 55.63% with the decision tree (J48), 67.02% with the Naïve Bayes tree, and 88.12% with k-Nearest Neighbor (with forgeries).
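The Euclidean-distance k-NN (IBk) classifier mentioned above can be sketched in a few lines: find the k training vectors nearest to the query and take a majority vote over their labels. The feature vectors and k value here are toy assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Euclidean-distance k-nearest-neighbor majority vote (IBk-style)."""
    d = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    nearest = np.argsort(d)[:k]                        # indices of k closest samples
    labels, counts = np.unique(np.asarray(y_train)[nearest], return_counts=True)
    return labels[np.argmax(counts)]                   # most common label wins
```

In the paper's setting, X_train would hold pixel-intensity feature vectors of enrolled signatures and the labels would be genuine/forgery classes.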
Offline signature identification using high intensity variations and cross ov... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Efficient Filtering Algorithms for Location-Aware Publish/Subscribe (IJSRD)
Location-based services are widely used in many systems. Previous systems use a pull (user-initiated) model, in which a user issues a query to a server that responds with location-aware answers. To deliver results to users with fast response times, a push (server-initiated) model is becoming an important computing model for next-generation location-based services. In the push model, subscribers register spatio-textual subscriptions capturing their interests, and publishers send spatio-textual messages; a high-performance location-aware publish/subscribe system is needed to deliver publishers' messages to the relevant subscribers. In this paper, we identify the research challenges that arise in designing such a system. We propose an R-tree-based index that merges textual descriptions into R-tree nodes, and we design efficient filtering algorithms and effective pruning techniques to achieve high performance. The method supports both conjunctive queries and ranking queries.
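The matching semantics described above can be shown with a brute-force filter: a message matches a conjunctive spatio-textual subscription when its location falls inside the subscription rectangle and it carries every subscribed keyword. The data shapes are illustrative assumptions; the paper's contribution is the R-tree index that avoids scanning every subscription.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    keywords: frozenset

def matches(sub, x, y, message_keywords):
    """Conjunctive match: location inside the rectangle AND all keywords present."""
    in_region = sub.xmin <= x <= sub.xmax and sub.ymin <= y <= sub.ymax
    return in_region and sub.keywords <= set(message_keywords)

def deliver(subs, x, y, message_keywords):
    """Brute-force filter; an R-tree over the rectangles would prune this scan."""
    return [s for s in subs if matches(s, x, y, message_keywords)]
```

Embedding keyword summaries in R-tree nodes lets the index prune subtrees whose subscriptions cannot match either spatially or textually.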
A survey on Stack Path Identification and Encryption Adopted as Spoofing Defe... (IOSR Journals)
Abstract: Spoofing attacks are a constant nuisance in the information world; many methodologies have been invented to reduce their effects, but much is still left to be desired. The impact these attacks have on electronic payment systems is highly detrimental to the economic world, given that such systems are viewed as performance enhancers for payments. This study elaborates a combination of two methodologies, StackPi and encryption, as a spoofing defense. Billions of shillings are lost to these attacks, giving rise to a situation that deserves undivided attention and further research. A thorough review of the methodologies that have been used to eradicate spoofing attacks, the limitations they possess, and the methodologies brought into play to succeed them motivates the strategy of integrating or combining methodologies and the benefits this strategy contributes to curbing spoofing attacks. With that knowledge in hand, it can be justified why the combination of StackPi and encryption is a recommended solution against spoofing attacks.
Keywords: Electronic Payment Systems, Encryption, Security, Spoofing attacks, StackPi
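To illustrate the Pi/StackPi idea referenced above: routers along a path push a few bits derived from their identity into the 16-bit IP Identification field, stack-style, so packets arriving on the same path carry the same path identifier regardless of the (spoofable) source address. The hashing and bit widths below are my assumptions for a sketch, not the scheme's exact marking function.

```python
import hashlib

MARK_BITS = 16  # width of the IP Identification field reused by Pi/StackPi

def router_mark(marking, router_id, bits=2):
    """Shift the current marking left and push `bits` bits derived from a hash
    of the router's identifier (StackPi-style stack marking)."""
    digest = hashlib.sha256(str(router_id).encode()).digest()
    push = digest[0] & ((1 << bits) - 1)
    return ((marking << bits) | push) & ((1 << MARK_BITS) - 1)

def path_identifier(path, bits=2):
    """Fold an ordered router path into the 16-bit marking a packet would carry."""
    m = 0
    for router in path:
        m = router_mark(m, router, bits)
    return m
```

A victim can then filter or rate-limit by path identifier, since an attacker cannot forge the marks added by routers downstream of it.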
Semantic Web service discovery has received massive attention in the last few years. With the increasing number of Web services available on the web, looking for a particular service has become very difficult, especially as clients' needs evolve. In this context, various approaches to discovering semantic Web services have been proposed. In this paper, we compare these approaches in order to assess their maturity and their fit to current domain requirements. The outcome of this comparison helps identify the mechanisms that constitute the strengths of the existing approaches, and will thereafter serve as a guideline for determining the basis of a discovery approach better adapted to the current context of Web services.
A Comparative Study of Forward and Reverse Engineering (ijsrd.com)
With software development booming compared with 20 years ago, software developed in the past may or may not have had well-supported documentation maintained during its evolution. This can widen the specification gap between the documentation and the legacy code when further evolutions and updates are made. Understanding the decisions made during development of the legacy code is the prime motive, and this is well supported by reverse engineering. In this paper, we compare transformational forward engineering, where a stepwise abstraction is obtained, with the transformational reverse methodology. Because the forward transformation process produces overlapping decisions, performance is affected; hence the transformational method of reverse engineering, which is a backwards forward-engineering process, is suitable. Moreover, the design recognition obtained is domain knowledge that forward engineers can reuse in the future.
invented to reduce on its effects but still there is a lot left to be desired. The kind of impact that this attacks have
on Electronic Payment Systems is so detrimental to the economic world given that this systems are viewed as
performance enhancers on payments. This study elaborates two methodologies a combination of StackPi and
Encryption as spoofing defense methodologies. Billions of shillings are lost in this rollercoaster thus giving rise
to a situation that deserves undivided attention and should be researched on. A profound argument on the
methodologies that have been used in this mission to eradicate spoofing attacks, the limitations that they posses
and other methodologies that have been brought in play to succeed them elicits an interesting he strategy of
integrating or combining methodologies and the benefits that this strategy contributes to curbing spoofing
attacks. With that knowledge underhand, it can be justified why the combination of Stack Pi and Encryption is a
recommended solution against spoofing attacks.
Keywords: Electronic Payment Systems, Encryption, Security, Spoofing attacks, Stack Pi
The semantic Web service discovery has been given massive attention within the last few years. With the
increasing number of Web services available on the web, looking for a particular service has become very
difficult, especially with the evolution of the clients’ needs. In this context, various approaches to discover
semantic Web services have been proposed. In this paper, we compare these approaches in order to assess
their maturity and their adaptation to the current domain requirements. The outcome of this comparison
will help us to identify the mechanisms that constitute the strengths of the existing approaches, and
thereafter will serve as guideline to determine the basis for a discovery approach more adapted to the
current context of Web services.
A Comparative Study of Forward and Reverse Engineeringijsrd.com
With the software development at its boom compared to 20 years in the past, software developed in the past may or may not have a well-supported documentation during the software evolution. This may increase the specification gap between the document and the legacy code to make further evolutions and updates. Understanding the legacy code of the underlying decisions made during development is the prime motto, which is very well supported by Reverse Engineering. In this paper, we compare the Transformational Forward engineering, where a stepwise abstraction is obtained with the Transformational Reverse Methodology. While the forward transformation process produces overlap of the decisions, performance is affected. Hence, the use of transformational method of Reverse Engineering which is a backwards Forward Engineering process is suitable. Besides the design recognition obtained is a domain knowledge which can be used in future by the forward engineers.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
About
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Technical Specifications
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
Key Features
Indigenized remote control interface card suitable for MAFI system CCR equipment. Compatible for IDM8000 CCR. Backplane mounted serial and TCP/Ethernet communication module for CCR remote access. IDM 8000 CCR remote control on serial and TCP protocol.
• Remote control: Parallel or serial interface
• Compatible with MAFI CCR system
• Copatiable with IDM8000 CCR
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
Application
• Remote control: Parallel or serial interface.
• Compatible with MAFI CCR system.
• Compatible with IDM8000 CCR.
• Compatible with Backplane mount serial communication.
• Compatible with commercial and Defence aviation CCR system.
• Remote control system for accessing CCR and allied system over serial or TCP.
• Indigenized local Support/presence in India.
• Easy in configuration using DIP switches.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two type of water scarcity. One is physical. The other is economic water scarcity.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
block diagram and signal flow graph representation
ylmaz2016.pdf
Accepted Manuscript
Score Level Fusion of Classifiers in Off-line Signature Verification
Mustafa Berkay Yılmaz, Berrin Yanıkoğlu
PII: S1566-2535(16)30003-3
DOI: 10.1016/j.inffus.2016.02.003
Reference: INFFUS 767
To appear in: Information Fusion
Received date: 17 March 2015
Revised date: 19 January 2016
Accepted date: 7 February 2016
Please cite this article as: Mustafa Berkay Yılmaz, Berrin Yanıkoğlu, Score Level Fusion of Classifiers in Off-line Signature Verification, Information Fusion (2016), doi: 10.1016/j.inffus.2016.02.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights
• A state-of-the-art offline signature verification system is presented.
• Complementary approaches and features are fused at multiple levels.
• Useful local binary patterns are selected for offline signature verification.
• In-depth analysis of the effects of alignment for registering signatures is done.
• A new feature is proposed from SIFT keypoint alignment.
Score Level Fusion of Classifiers in Off-line Signature Verification
Mustafa Berkay Yılmaz∗, Berrin Yanıkoğlu
Faculty of Engineering and Natural Sciences, Sabancı University, İstanbul, Turkey
Abstract

Offline signature verification is a task that benefits from matching both the global shape and local details; as such, it is particularly suitable to a fusion approach. We present a system that uses a score-level fusion of complementary classifiers that use different local features (histogram of oriented gradients, local binary patterns and scale invariant feature transform descriptors), where each classifier uses a feature-level fusion to represent local features at coarse-to-fine levels. For classifiers, two different approaches are investigated, namely global and user-dependent classifiers. User-dependent classifiers are trained separately for each user, to learn to differentiate that user's genuine signatures from other signatures; while a single global classifier is trained with difference vectors of query and reference signatures of all users in the training set, to learn the importance of different types of dissimilarities.

The fusion of all classifiers achieves a state-of-the-art performance with 6.97% equal error rate in skilled forgery tests using the public GPDS-160 signature database. The proposed system does not require skilled forgeries of the enrolling user, which is essential for real life applications.

Keywords: offline signature; score-level fusion; histogram of oriented gradients; local binary patterns; scale invariant feature transform; user-dependent/independent classifiers; support vector machines
1. Introduction
Signature verification is used in verifying the claimed identity of a person through his/her chosen and previously registered signature. The signature's widespread acceptance by the public and niche applications (validating paper documents and use in banking applications) make it a desirable biometric.
Signature is considered to be a behavioral biometric that encodes the ballistic movements of the signer and as such is difficult to imitate. On the other hand, compared to physical traits such as fingerprint, iris or face, a signature typically shows higher intra-class and time variability. Furthermore, as with passwords, a user may choose a simple signature that is easy to forge.
Depending on the signature acquisition method used, automatic signature verification systems can be classified into two groups: online (dynamic) and offline (static). A static signature image is the only input to offline systems, while signature trajectory as a function of time is also available in online signatures. Main difficulties in both tasks are simple (easy to forge) signatures and variations among a user's signatures, but the dynamic information available in online signatures makes the signature more unique and more difficult to forge.

∗Corresponding author. Tel.: +90 216 4839528; Fax: +90 216 4839550. E-mail address: {berkayyilmaz,berrin}@sabanciuniv.edu
Research databases define two types of forgeries: a skilled forgery refers to a forgery which is signed by a person who has had access to some number of genuine signatures and practiced them for some time. In contrast, a random forgery is typically collected from other people's real signatures, simulating the case where the impostor knows neither the name nor the shape of the target signature and hence uses his/her own for forgery. Random forgery detection is a much easier task compared to skilled forgery detection. In this work, as in the literature, when the term "forgery" is used without further qualifications, it may refer to either a skilled or a random forgery.
Preprint submitted to Information Fusion, February 11, 2016

Systems' evaluation is often done in terms of the Equal Error Rate (EER), which is the point where the False Accept Rate (FAR) and False Reject Rate (FRR) are equal, and occasionally in terms of the Distinguishing Error Rate (DER), which is the average of FAR and FRR.
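As a concrete illustration (not code from the paper), EER and DER can be computed from lists of genuine and forgery scores by sweeping a decision threshold over the observed score values; the function names here are our own:

```python
import numpy as np

def far_frr(genuine, forgery, threshold):
    """FAR: fraction of forgeries accepted; FRR: fraction of genuines
    rejected. Higher score means more likely genuine; accept when
    score >= threshold."""
    far = np.mean(np.asarray(forgery) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

def eer_and_der(genuine, forgery):
    """Sweep candidate thresholds taken from the score values; the EER
    operating point is where |FAR - FRR| is smallest, and at that point
    the average of FAR and FRR (the DER there) equals both of them."""
    thresholds = np.sort(np.concatenate([genuine, forgery]))
    best = min((abs(far - frr), (far + frr) / 2.0)
               for far, frr in (far_frr(genuine, forgery, t) for t in thresholds))
    return best[1]
```

For perfectly separable score lists the returned rate is 0; with one false accept and one false reject out of two samples each, it is 0.5.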
While the use of public signature databases has become the norm in recent years, the databases do not always have strictly specified protocols. As a result, many reported accuracies cannot be directly compared if they use a different (sometimes random) subset of the users; a different number of reference signatures (using more helps the system as it provides more information); or a different number of skilled forgeries.
In this work, we present a state-of-the-art offline signature verification system that uses a fusion of complementary features, classifiers and preprocessing techniques, with the aim to explore the limits in signature verification accuracy.
Our main contribution is the comprehensive study and treatment of different aspects of offline signature verification, which are fused at the end to form a state-of-the-art verification system, with novel aspects including the following:
• We propose an alignment algorithm that improves overall accuracy by more than 2% on average. While alignment of test images degrades overall performance, we have found that automatic alignment of references is beneficial when used with the global classifiers.
• We improve on the use of the well-known features and approaches by novel adaptations. i) We use coarse-to-fine grids for capturing a spectrum of global to local features when using the histogram of oriented gradients (HOG) and local binary patterns (LBP). ii) We select the best LBP templates according to term frequencies and combine similar LBP template histogram bins to obtain a dense histogram. iii) We use a novel scale invariant feature transform (SIFT) descriptor matching algorithm that seeks more than one global transformation in order to allow different transformations in different parts of a signature.
• We incorporate user-dependent and user-independent verification concurrently. We apply a score level fusion to combine classifiers with complementary feature types, where the weights are learnt from a separate validation set.
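The weighted score-level fusion with validation-learnt weights mentioned above can be sketched as follows. This is a minimal illustration with hypothetical names and a toy mean-gap selection criterion, not the paper's actual weight-learning procedure:

```python
import numpy as np

def fuse_scores(score_matrix, weights):
    """Weighted-sum fusion: each row holds one query's scores from the
    individual classifiers; weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(score_matrix, dtype=float) @ w

def grid_search_weights(val_scores, val_labels, step=0.1):
    """Toy weight search for two classifiers: pick the mixing weight that
    best separates genuine (label 1) from forgery (label 0) fused scores
    on a validation set, measured by the gap between class means."""
    labels = np.asarray(val_labels)
    best_w, best_gap = None, -np.inf
    for a in np.arange(0.0, 1.0 + 1e-9, step):
        fused = fuse_scores(val_scores, [a, 1.0 - a])
        gap = fused[labels == 1].mean() - fused[labels == 0].mean()
        if gap > best_gap:
            best_w, best_gap = (a, 1.0 - a), gap
    return best_w
```

With one discriminative and one uninformative classifier on the validation set, the search puts essentially all weight on the discriminative one.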
2. Literature Review
Offline signature verification is a well-researched topic where many different approaches have been studied. A series of surveys covering advances in the field are available [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Here, we review some of the recent works, grouped according to focus areas.
Note that while we give some performance figures for completeness, many of the reported numbers are not directly comparable as they are obtained under different conditions (number of reference signatures, use of skilled signatures, etc.). We discuss this issue in Section 6.6.
Feature extraction
Several different features are used in offline signature verification, especially local features such as SIFT descriptors, wavelet features and LBP, among others. Solar et al. use SIFT descriptors in conjunction with the Bayes classifier [11]. The performance is assessed using the GPDS-160 signature dataset, with a 15.3% DER. However, only a small subset of all skilled forgeries, and not the full test set, is used for testing.
Vargas et al. use complex features based on LBP to perform statistical texture analysis [12]. To extract second order statistical texture features from the image, the gray level co-occurrence matrix method is also utilized. The best combination of features is reported to achieve an EER of 9.02% on the gray-level GPDS-100 database, using 10 reference signatures.
Different base classifiers
Ferrer et al. [13] have evaluated the effectiveness of hidden Markov models (HMMs), support vector machines (SVMs) and the Euclidean distance classifier on the publicly available GPDS-160 database. When 12 genuine signatures and 3 skilled forgeries are used in training the classifiers, the DER rates are found as 13.35%, 14.27% and 15.94% for the HMMs, SVM (radial basis function kernel) and the Euclidean distance classifier, respectively.
A comparison of probabilistic neural networks (PNN) and K-nearest neighbor (KNN) is done by Vargas et al. [14]. Genuine and skilled forgery signatures of each subject are divided into two equal parts, resulting in 12 genuine and 12 skilled forgeries in the training set and the same amount in the test set. The results on the gray-level GPDS-160 database are found to be close: the best results are found to be 12.62% DER with the KNN (k = 3) and 12.33% DER with the PNN.
Use of classifier combination
There are quite a lot of studies on the effect of classifier combination in offline signature verification. In one of the earlier works, Fierrez-Aguilar et al. consider the sum rule for combining global and local image features [15]. One of the experts in this work is based on a global image analysis and a statistical distance measure, while the second one is based on local image analysis with HMMs. It is shown that local information outperforms the global analysis in all reported cases. The two proposed systems are also shown to give complementary recognition information, which is desired in fusion schemes.
Receiver operating characteristic (ROC) curves are used for classifier combination by Oliveira et al. [16]. Different fusion strategies to combine the partial decisions yielded by SVM classifiers are analyzed, and the ROC curves produced by different classifiers are combined using maximum likelihood analysis. The authors demonstrate that the combined classifier based on the writer-independent approach reduces the FRR, while keeping FAR at acceptable levels.
An ensemble of classifiers based on graphometric features is used to improve the reliability of the classification by Bertolini et al. [17]. A pool of base classifiers is first trained using only genuine signatures and random forgeries; then an ensemble is built using genetic algorithms with two different scenarios. In one, it is assumed that only genuine signatures and random forgeries are available to guide the search, while simple and simulated forgeries are also assumed to be available in the second one. Different objective functions are derived from the ROC curves for ensemble tuning. A private database of 100 writers is utilized for evaluation, considering 5 genuine references for training and only skilled forgeries for testing. The best result is found as 11.14% DER using the area under curve optimization.
Score level combination is examined for offline signature verification by Prakash and Guru [18]. Classifiers of distance and orientation features are used individually and in combination. Distance features and orientation features individually provide 21.61% and 19.88% DER on the MCYT-75 corpus. The max fusion rule decreases the DER to 18.26%, while the average rule decreases the DER to 17.33% when the weights are fixed empirically.
A hybrid generative-discriminative ensemble of classifiers is proposed by Batista et al. to design an offline signature verification system from few references, where the classifier selection process is performed dynamically [19]. To design the generative stage, multiple discrete left-to-right HMMs are trained using a different number of states and codebook sizes, allowing the system to learn signatures at different levels of perception. To design the discriminative stage, HMM likelihoods are measured for each training signature and assembled into feature vectors that are used to train a diversified pool of two-class classifiers through a specialized random subspace method. The most accurate ensembles are selected based on the K-nearest-oracles algorithm. The GPDS-160 database is used to evaluate the system and 16.81% EER is reported using 12 references per user.
An offline signature verification system using two different classifier training approaches is proposed by Hu and Chen [20]. In the first mode, each SVM is trained with feature vectors obtained from the reference signatures of the corresponding user and random forgeries, while the global Adaboost classifier is trained using genuine and random forgery signatures of signers that are excluded from the test set. Global and user-dependent classifiers are used separately. Combination of all features for writer-dependent SVMs results in 7.66% EER for 150 randomly selected signers from the gray-level GPDS-300 dataset, using 10 references. The writer-independent Adaboost using a combination of all features results in 9.94% EER for a random 100-person subset of the same dataset, again using 10 references.
Fierrez-Aguilar et al. give a comprehensive survey of different fusion strategies in the context of multimodal biometrics, but the results are also applicable to single modality combination [21]. Different approaches are categorized as global fusion-global decision, local fusion-global decision, global fusion-local decision, and local fusion-local decision. Adapted fusion and decision methods are also proposed in the same work, using both the global and local information.
In summary, among the systems that report accuracies on skilled forgery tests on the GPDS-160 database, which is also used in this work, the best DER is reported to be 16.81% with 12 reference signatures [19]. There are other systems that report lower DERs; however, we cannot fully compare our results with theirs because they report DERs on slightly non-standard versions of the dataset (a random subset of users) or with different testing protocols (e.g. with fewer skilled forgeries). Previous works that have reported results on the GPDS dataset are listed in Section 6, following our proposed system described in Sections 3-5.
3. Preprocessing
Signature images have variations in terms of pen thickness, embellishments found in strokes, translation or relative position of strokes, rotation and scaling, even within the genuine signatures of the same subject. In order to gain invariance to such natural variations, images should be normalized before they are further processed.
To compensate for large translation variations that would result from embellishments, we discard strokes that are far away from the image centroid. This is done using a distance threshold which is derived from the standard deviation of the coordinates of trajectory points (≈ 3σ).
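A minimal sketch of this centroid-distance pruning, assuming a binary image with ink pixels set to 1 (the helper name and the exact thresholding details are ours, not the paper's):

```python
import numpy as np

def discard_far_strokes(binary_img, k=3.0):
    """Remove ink pixels farther than k standard deviations from the ink
    centroid (a rough proxy for discarding embellishment strokes).
    Assumes ink pixels are 1 and background is 0."""
    ys, xs = np.nonzero(binary_img)
    cy, cx = ys.mean(), xs.mean()
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    # Spread of the trajectory points around the centroid.
    sigma = np.sqrt((dist ** 2).mean())
    out = np.zeros_like(binary_img)
    keep = dist <= k * sigma
    out[ys[keep], xs[keep]] = 1
    return out
```

A dense stroke block survives the filter while an isolated far-away pixel is dropped.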
To compensate for pen thickness variations, we find the upper and lower contours of the signature. Skeletonisation is another alternative, but it loses some details, as can be seen in Figure 1. This step is found to be useful in feature types that use gradient information (HOG), while features that use texture information, namely LBP and SIFT, are directly extracted from the image.
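Upper/lower contour extraction can be sketched as a per-column scan, again assuming a binary ink image; this simple version is an illustration, not necessarily the authors' exact procedure:

```python
import numpy as np

def upper_lower_contours(binary_img):
    """For each column that contains ink, keep only the topmost (upper
    contour) and bottommost (lower contour) ink pixels. This normalizes
    away pen-thickness variation while preserving the outline."""
    out = np.zeros_like(binary_img)
    for col in range(binary_img.shape[1]):
        rows = np.nonzero(binary_img[:, col])[0]
        if rows.size:
            out[rows[0], col] = 1   # upper contour
            out[rows[-1], col] = 1  # lower contour
    return out
```

A 5-pixel-thick vertical stroke in one column reduces to its two extreme pixels.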
For the effects of rotation, scaling and fine translation, we use the following alignment procedure. Each query signature image Q of a training user is aligned to each reference signature R_i of that user, with the best scaling (σ), rotation (θ) and translation (δ) parameters minimizing the distance between the query and reference image:

\argmin_{\sigma,\theta,\delta} \{ \| Q^i_{\sigma,\theta,\delta} - R_i \| \}, \qquad (1)

where Q^i_{\sigma,\theta,\delta} is the transformed version of Q. We use the ℓ2-norm of the difference between LBP features. In fact, for faster alignment during testing, we apply all possible transformations to each enrolled reference R_i ahead of time and apply the inverse transformation to Q with parameters 1/σ, −θ and −δ. An example reference, query and aligned query are shown in Figure 2.
We use a small interval to search for the best trans-
formation: −2.5 to +2.5 degrees for θ, 0.8 to 1.2 for
σ, and −10 to +10 pixels for δ.

Figure 1: Preprocessing. (a) Original signature, (b) contour
image, (c) skeleton image.

These intervals appear sufficient, as there are no
significant alignment differences in the database
used. Larger parameter intervals naturally increase
the cost of the search and should be handled by
more sophisticated methods such as iterative closest
point (ICP) or random sample consensus (RANSAC)
algorithms.
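The brute-force search over the discretized parameter grid can be sketched as follows. This is a toy illustration, not the paper's implementation: translation-only integer shifts and raw pixels stand in for the full (σ, θ, δ) warps and the LBP features used in Equation 1.

```python
import itertools
import numpy as np

# Toy stand-ins: the paper warps images and compares LBP features;
# cyclic shifts and raw pixels keep this sketch self-contained.
def transform(img, dx, dy):
    return np.roll(np.roll(img, dx, axis=1), dy, axis=0)

def features(img):
    return img.ravel().astype(float)

def best_alignment(query, ref, shifts=range(-10, 11)):
    """Exhaustive search over the discretized parameter grid,
    minimizing the l2-norm between feature vectors (Eq. 1)."""
    best = None
    for dx, dy in itertools.product(shifts, shifts):
        d = np.linalg.norm(features(transform(query, dx, dy)) - features(ref))
        if best is None or d < best[0]:
            best = (d, dx, dy)
    return best  # (distance, dx, dy)

ref = np.zeros((32, 32)); ref[10:20, 10:20] = 1
query = transform(ref, 3, -2)            # query is a shifted copy of ref
dist, dx, dy = best_alignment(query, ref)
print(dx, dy)                            # -3 2: recovers the inverse shift
```

The same loop structure extends to scaling and rotation; precomputing the transformed references, as the paper does, simply moves the `transform` call out of the per-query loop.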
Note that preprocessing that removes individual
characteristics (e.g. signer always signs with a 20
degree slope) may lead to performance degradation
in biometric systems. Furthermore, in alignment, a
sufficiently similar forgery can be made very similar
to a genuine signature using a complex transforma-
tion such as a non-linear scaling. In that case, the
system should either incorporate a cost measure re-
flecting the effort needed for alignment (e.g. similar
to a dynamic time warping algorithm); or it should
only use simple transformations (e.g. scaling, rota-
tion) within a limited range, as done in our work.
In the proposed system, signature alignment is
applied only in the training phase of the global
classifier, so as to obtain features that are better
aligned with the reference signatures. It is
experimentally observed that the alignment algo-
rithm improves performance, as reported in Table 3,
but aligning test signatures or using alignment with
user-dependent classifiers does not improve the over-
all performance.
ACCEPTED MANUSCRIPT
Figure 2: Alignment example: (a) not aligned and (b) aligned
reference and query.
4. Feature Extraction
The feature extraction step reduces the dimensionality
of the original signature images while preserving the
important information encoded in the image for
distinguishing between the genuine and forgery
classes. We utilize a complementary set of fea-
tures that are commonly reported to be success-
ful in the context of offline signature verification,
namely HOG, LBP and SIFT features. After de-
scribing the grids where local features are extracted
in Section 4.1, the features are explained in detail
in Sections 4.2-4.4.
4.1. Grids in Cartesian and Polar Coordinates
In order to develop a system robust to global
shape variations, we extract features from local
zones of the signature image. We evaluated two
different alternatives:
Cartesian grids: The first and most common
choice in many works is rectangular grids in
Cartesian coordinates. The grids may be overlapping,
to capture the signature at grid boundaries, or
non-overlapping. We use overlapping grids, which
are found to perform better.
Log-polar grids: Another choice of coordinate
system is the log-polar coordinate system. If the
registration point is selected as the top-left point
of the bounding box and the embellishments are on
the right, then the left parts of the two signatures
align better than the right. With this observation
and at the cost of having some redundant features,
we decide to use multiple registration points (cen-
ter, top-left, top-right and so on) in the polar grid,
to reduce the effect of registration mismatches.
Figure 3: Log-polar grids, origin taken as the image center.
Figure 4: Log-polar grids, origin taken as the top-left corner.
Using multiple fixed registration points is moti-
vated by the fact that there are no reference points
in signatures, unlike face (e.g. eyes) or to some
degree fingerprints (core point). The centroid or
center of mass can be used as a lesser alternative in
registering two signatures. Unfortunately, the loca-
tion of both of these points may show large varia-
tions due especially to large variations in embellish-
ment. Another alternative could be the use of Local
Self-Similarities (LSS) as proposed by Shechtman
and Irani [22] to extract reference points for sig-
nature matching. However, LSS tends to provide
good matching results for texturally rich samples,
which is not the case for binary signatures.
A sample signature divided into regions in log-
polar space is shown in Figure 3, where the origin
is taken as the image center. The same signature with
overlaid log-polar grids, where the top-left corner is
used as the origin, is shown in Figure 4.
Hierarchical representation: Using a small
number of grids will result in features that are al-
most globally extracted, losing location informa-
tion. In contrast, using a large number of grids
will decrease the system’s ability to allow for small
deformations.
To eliminate the need to search for an ideal grid
resolution, we use a hierarchy of grids of increas-
ing resolution and thus extract coarse-to-fine fea-
tures. At the top level, a single grid corresponds
to the full image, while lower levels have increas-
ingly more grid zones. Features extracted from all
levels are then concatenated to form the final
feature vector.
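The coarse-to-fine concatenation can be sketched as follows. This is a minimal illustration, assuming square power-of-two zone counts per level; the per-zone intensity histogram is a hypothetical stand-in for the HOG/LBP/SIFT descriptors described next.

```python
import numpy as np

def zone_histogram(zone, bins=16):
    """Stand-in local descriptor; the paper uses HOG/LBP/SIFT features."""
    h, _ = np.histogram(zone, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)          # normalized per-zone histogram

def pyramid_features(img, levels=3, bins=16):
    """Concatenate coarse-to-fine features: level l has 2^l x 2^l zones,
    level 0 being the full image."""
    feats = []
    for l in range(levels):
        for rows in np.array_split(img, 2**l, axis=0):
            for zone in np.array_split(rows, 2**l, axis=1):
                feats.append(zone_histogram(zone, bins))
    return np.concatenate(feats)

img = np.random.randint(0, 256, (64, 64))
f = pyramid_features(img)
# zones per level: 1 + 4 + 16 = 21, each contributing `bins` values
print(f.shape)   # (336,)
```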
Feature vectors: Once the grids are fixed, the
feature vectors are obtained by the concatenation
of features extracted from each grid zone. Using a
fixed grid solves the problem of matching uniformly
scaled signatures; however embellishments such as
those at the beginning or end of a signature may
significantly vary in location, orientation and size,
thereby significantly changing the global shape of
a signature. Our expectation is that with the use
of multiple registration points, at least some of the
features will capture similarities between two signa-
tures, even in the presence of such embellishments.
4.2. Histogram of Oriented Gradients
We use the Histogram of Oriented Gradients
(HOG) features introduced by Dalal and Triggs
[23]. The HOG features represent the gradient ori-
entation relative to the dominant orientation. They
have been used before in offline signature verifica-
tion by Zhang [24].
While computing the gradient orientation his-
togram, circular shift normalization is applied within
each grid zone, to allow for rotational differences of
the strokes. Specifically, after finding the gradient
orientation at each point, we find the dominant gra-
dient orientation and circularly shift the histogram
so that the dominant orientation falls into the first
bin.
HOG features are extracted both in Cartesian
and polar coordinates, separately.
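The circular-shift normalization can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the bin count and the magnitude-weighted voting are assumptions.

```python
import numpy as np

def orientation_histogram(zone, bins=8):
    """Gradient-orientation histogram with circular-shift normalization:
    the histogram is rotated so the dominant bin comes first (a sketch
    of the normalization described above)."""
    gy, gx = np.gradient(zone.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)           # orientations in [0, 2*pi)
    idx = (ang / (2 * np.pi) * bins).astype(int) % bins
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return np.roll(hist, -np.argmax(hist))            # dominant bin -> bin 0

zone = np.zeros((16, 16)); zone[:, 8:] = 255          # vertical edge
h = orientation_histogram(zone)
print(np.argmax(h))   # 0: dominant orientation sits in the first bin
```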
4.3. Local Binary Patterns
Local binary patterns (LBP) form a powerful fea-
ture vector that is proposed to capture texture in
objects [25]. LBP is used in several works and found
to be suitable for offline signature verification as
well [12, 26].
An important drawback of the original LBP
method is the sparsity of the generated histogram;
for example the size of the histogram for a 3 by 3
neighborhood is 256. More importantly, many of
these patterns would never be seen on a small im-
age sample. While there are many LBP variants
proposed in the literature, there are few works for
LBP pattern selection. An example of LBP histogram
selection is used in color texture classification by
Porebski et al. [27]. It consists of assigning a score
to each histogram bin, measuring its efficiency in
characterizing the similarity of textures within
different classes.
There are also many works in the literature that
offer more compact histograms instead of pattern se-
lection. In the work by Sujatha et al. [28], a
special OR operator is implemented which takes
the Boolean OR of symmetric neighbor
pairs, claimed to preserve more than 90% of the in-
formation content while reducing the LBP code
from 8 bits to 4 bits. Another work to compactly
represent exponentially growing circular neighbor-
hoods is presented by Mäenpää and Pietikäinen
[29]. Large-scale texture patterns are detected by
combining exponentially growing circular neighbor-
hoods with Gaussian low-pass filtering. Then, cel-
lular automata are proposed as a way of compactly
encoding arbitrarily large circular neighborhoods.
Because of the exponential growth of the size of
the histograms, it is not feasible to directly encode
farther neighborhoods with closer neighborhoods.
A novel way to jointly encode multiple scales is
proposed by Qi et al. [30]. When each scale is
encoded into histograms individually, the correla-
tion between different scales is ignored and a lot
of discriminative information is lost. The joint en-
coding strategy can capture the correlation between
different scales and hence depict richer local struc-
tures. Reported results show about 7% accuracy
improvement over baseline multi-scale LBP on tex-
ture recognition problems.
In another work in this direction, Zhang et al.
offer a multi-block LBP method [31]. Inspired from
Haar-like features [32], simple averaging in multiple
rectangular blocks is applied to come up with 3 by
3 rectangular blocks of multiple pixels, each being
treated like a single-pixel to calculate the conven-
tional LBP code. This method is capable of taking
farther neighborhoods into account, while avoiding
the exponential growth in the resulting histogram.
In this work, LBP features are extracted only in
Cartesian coordinates. We utilized different LBP
pattern selection alternatives, as explained below.
4.3.1. LBP-0
We name the conventional LBP method for a 3x3
neighborhood (8-neighbors) as LBP-0. We extract
LBP-0 features both globally (full image) and in
finer grids in the Cartesian coordinates, in a coarse-
to-fine approach. This is the baseline LBP extrac-
tion method that is used in subsequent LBP pattern
selection alternatives given below.
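A minimal LBP-0 implementation for reference; the >= comparison convention, clockwise bit order, and border handling are assumptions, not details from the paper.

```python
import numpy as np

# 8-neighbor offsets, enumerated clockwise from the top-left (an assumption).
OFFSETS = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]

def lbp0_histogram(img):
    """Conventional 3x3 LBP (LBP-0): each neighbor contributes one bit,
    set when the neighbor is >= the center pixel; 256-bin histogram."""
    img = img.astype(int)
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(OFFSETS):
        neighbor = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (neighbor >= center).astype(int) << bit
    return np.bincount(code.ravel(), minlength=256)

img = np.random.randint(0, 256, (32, 32))
hist = lbp0_histogram(img)
print(hist.sum())   # 900 = 30*30 interior pixels, one code each
```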
4.3.2. LBP-1
LBP-0 results in a sparse feature vector, since
most of the patterns are never seen in a given grid
zone. Also, after the hierarchical grid placement,
the feature vector grows large, although the considered
neighbors are just the 8 neighbors in a 3 × 3 neigh-
borhood.
In the LBP-1 method, we make the system
faster and concurrently improve the performance
by separately considering the patterns formed by
the 4-neighbors ({South, North, West, East}) and
the diagonal neighbors ({North-East, North-West,
South-East, South-West}), resulting in a feature
vector of size 2 × 2^4 = 32. This circularly sym-
metric grouping is inspired by the work of Ojala et
al. [33].
When computing the counts for the 4-neighbors’
patterns, we implicitly combine the counts of all dif-
ferent combinations of diagonal-neighbors as don’t-
care patterns, and vice versa. This is illustrated
in Figure 5 where the gray pixels and all of their
possible 16 combinations are combined into the his-
togram of each pattern formed by the 4-neighbors
(the black pixels).
Figure 5: Each 4-neighbor implicitly combines all combina-
tions of diagonal neighbors in LBP-1 method.
4.3.3. LBP-2
In this method we take all patterns in LBP-0 and
select the best patterns explicitly. The selection cri-
terion is based on the difference of the term frequen-
cies (∆TF) of each pattern among the genuine and
forgery samples. The aim is to select those patterns
that occur more among the genuine signatures, as
well as those that appear among the forgeries (e.g.
patterns resulting from hesitation).
To do this, we first compute the histogram H of
all the LBP-0 codes over the whole image, sepa-
rately for genuine signatures and skilled forgeries,
using the training set (GPDS 161-300). Then we
compute the ∆TF value for each LBP pattern p and
select the first 32 patterns with the highest |∆TF|
values:

$$\Delta TF(p) = H_{\text{Genuine}}(p) - H_{\text{Forgery}}(p). \tag{2}$$
Alternative pattern selections: To explore
the effect of selecting the best patterns according to
the delta term frequency criterion, we generate two
other features: i) by selecting the worst 32 patterns
with the smallest |∆TF| values and ii) by selecting
the next best 32 patterns, denoted as LBP-2min
and LBP-2n32, respectively.
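The ΔTF-based selection of LBP-2, LBP-2min and LBP-2n32 can be sketched as follows. This is a simplified illustration with synthetic histograms; the paper accumulates the histograms over the GPDS 161-300 training images.

```python
import numpy as np

def select_patterns(genuine_hists, forgery_hists, k=32):
    """Rank LBP patterns by |dTF| = |H_genuine - H_forgery| (Eq. 2):
    top-k (LBP-2), bottom-k (LBP-2min), and next-k (LBP-2n32)."""
    h_g = np.sum(genuine_hists, axis=0)   # accumulate over genuine images
    h_f = np.sum(forgery_hists, axis=0)   # accumulate over forgery images
    order = np.argsort(-np.abs(h_g - h_f))
    return order[:k], order[-k:], order[k:2*k]

rng = np.random.default_rng(0)
gen = rng.integers(0, 50, size=(10, 256))    # synthetic per-image histograms
forg = rng.integers(0, 50, size=(10, 256))
best, worst, next32 = select_patterns(gen, forg)
print(len(best), len(worst), len(next32))    # 32 32 32
```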
4.3.4. LBP-2F
Detecting LBP patterns on a window larger than
3x3 can be useful, but the number of patterns grows
exponentially with the size of the window. For in-
stance, there are 2^24 patterns to be considered in a
5x5 window. In order to take larger neighborhoods
into account, in this variation we decided to consider
only the borderline pixels of the window. For in-
stance, in a 5x5 window, only the patterns con-
structed by pixels at Chebyshev distance 2 from
the center are considered, ignoring the variations
in the 3x3 center, as shown in Figure 6.
Figure 6: Neighbors with Chebyshev distance of 2 are shown
in black, center pixel shown in gray.
Reducing the feature size: Even with only
the borderline pixels, there are 2^16 different pat-
terns for a 5x5 window, which is difficult to deal
with in practice. In the literature, the generalized
LBP operator is derived on the basis of a circu-
larly symmetric neighbor set of a defined number
of members, on a circle of radius R that is designed
to deal with this situation [33]. Neighbors in each
of these circularly symmetric groups are completely
independent and at the end all of the neighbors are
covered by different groups, forming a basis. This
LBP operator is previously applied to offline signa-
ture verification by Vargas et al. [12].
Similar to this work, we sample the pixels at
Chebyshev distance 2, resulting in two groups of
8 pixels each, as shown in Figure 7 for a 5x5 window.

Figure 7: Neighbors with Chebyshev distance 2 are sampled
in 2 groups, each group having 8 pixels.

Then,
we select the best patterns for each group, as ex-
plained in Section 4.3.3. Pre-selected specific paths
of Chebyshev distance 2 are also used in offline sig-
nature verification by Barkoula et al. [34].
Alternative pattern selections: To explore
the effect of selecting the best patterns according
to term frequency criterion, alternative pattern se-
lections LBP-2Fmin and LBP-2Fn32 are again con-
sidered, as in Section 4.3.3.
Multiple Chebyshev distances: We repeat
this process for Chebyshev distances of one, two
and three, where there are one, two or three neigh-
bor groups, respectively. Building individual classi-
fiers for each neighbor group, we obtain a total of 6
classifiers, each one being an expert on completely
independent information. We then use the aver-
age of these 6 classifiers’ output, to obtain the final
output for the LBP-2F method. Hence the LBP-2F
method implicitly includes classifier combination.
4.4. Scale Invariant Feature Transform
The Scale Invariant Feature Transform (SIFT) is
a popular and successful feature extraction method
used in computer vision for finding the correspon-
dence between different views of an object or detect-
ing and recognizing objects/scenes. The SIFT de-
scriptors are extracted around local interest points
and serve as distinctive, scale and rotation invariant
features [35].
While SIFT features have previously been used
for offline signature verification [11, 36], we use both
the conventional SIFT matching approach and a
novel one that is found to be more suitable for
offline signature verification. In the second approach,
we discretize the SIFT transformations and compute
a feature vector in the form of a transformation
histogram, counting the votes cast by matching
point pairs for each transformation bin. The
motivation is to allow different parts of the
signature to undergo different transformations,
accommodating non-linear deformations.
4.4.1. Conventional approach
In the conventional SIFT keypoint matching, a
common rigid transformation can be found by vot-
ing for the most commonly occurring rigid trans-
formation between matching point pairs. Example
matches between two signature pairs are shown in
Figure 8, where corresponding matches of the most
popular transformation bin are separately shown.
In our first approach, we discretize the SIFT
transformations indicated by different matching
(a)
(b)
(c)
(d)
Figure 8: Example SIFT matches in (a) and (c), with the
corresponding most voted transformations in (b) and (d).
points and analyze the number of votes in the most
popular transformation for deciding whether the
signature is genuine or forgery.
The transformation parameters between two
matching points are found as follows. Assume that
x1, y1 and x2, y2 are coordinates of two SIFT de-
scriptors to be matched. To find the transfor-
mation, we first find the normalized coordinates
xn1 = x1/w1, yn1 = y1/h1, xn2 = x2/w2, yn2 =
y2/h2 where wi and hi are the width and height
of image i. We find the translation in two dimen-
sions using xd = xn1 − xn2 and yd = yn1 − yn2.
We then find orientations of matches using θ =
arctan((y1 − y2)/(x1 − x2)). We quantize θ val-
ues into 8 bins, xd values into 4 bins, yd values into
4 bins; in total 128 bins.
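The quantization of matches into 8 × 4 × 4 = 128 transformation bins can be sketched as follows. The exact bin-edge placement is an assumption, and the `matches` input format (pairs of matched keypoint coordinates) is hypothetical.

```python
import numpy as np

def transform_histogram(matches, size1, size2,
                        theta_bins=8, xd_bins=4, yd_bins=4):
    """Vote matched keypoint pairs into 8x4x4 = 128 transformation bins
    (quantized match orientation and normalized x/y translation), as
    described above. `matches` is a list of ((x1, y1), (x2, y2)) pairs."""
    (w1, h1), (w2, h2) = size1, size2
    hist = np.zeros((theta_bins, xd_bins, yd_bins))
    for (x1, y1), (x2, y2) in matches:
        xd = x1 / w1 - x2 / w2                  # normalized translation in x
        yd = y1 / h1 - y2 / h2                  # normalized translation in y
        theta = np.arctan2(y1 - y2, x1 - x2)    # orientation of the match
        ti = int((theta + np.pi) / (2 * np.pi) * theta_bins) % theta_bins
        xi = min(int((xd + 1) / 2 * xd_bins), xd_bins - 1)   # xd in [-1, 1]
        yi = min(int((yd + 1) / 2 * yd_bins), yd_bins - 1)
        hist[ti, xi, yi] += 1
    return hist.ravel()   # SIFT-TH feature; max bin / total gives SIFT-MP

# two matches agreeing on one transformation, plus one outlier match
matches = [((10, 20), (13, 22)), ((30, 40), (33, 42)), ((5, 5), (80, 70))]
h = transform_histogram(matches, (100, 100), (100, 100))
print(int(h.sum()), int(h.max()))   # 3 2
```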
Since the number of matches will be higher for
longer and more complex signatures, we have to
normalize the match counts so that longer sig-
natures are not easily matched due to the sheer
number of matches. We investigate two different
normalization methods. In the first normalization
method called SIFT-MP, we simply consider the
ratio of matches in the most voted transformation
(Nh) to the total number of matches. In the second
normalization method, called SIFT-MR, we normal-
ize Nh by the average number of matches among
reference pairs (N^R_h). Thus, SIFT-MR considers
the ratio of matched points in the most voted trans-
formation to what is observed among reference pairs.
4.4.2. Handling non-linear alignments
Signatures of a person often display large non-
linear variations, especially with signatures having
extensive embellishments. With these signatures,
finding one global transformation to align them is
not sufficient. To handle such situations, we devel-
oped a novel method which we refer to as SIFT-TH. In
this method, we use the number of votes for all the
transformation bins as a feature vector, resulting
in a feature vector of the same size as the num-
ber of considered transformations (in our case
8 × 4 × 4 = 128).
This novel representation is intended to address
signatures where two parts of a signature may un-
dergo different transformations. For instance for
the genuine signatures of a person who signs his
signature without any variability in the main body
but a lot of variation in the embellishing stroke,
the transformation histogram may show a consis-
tent high match in one bin (no rotation and no
translation) and a smaller match in one of the other
bins.
Finally, we train USVM classifiers described in
Section 5.2, where positive examples are collected
by matching reference signature pairs and negative
examples are collected by matching references to
random forgeries.
For testing any of the three methods described
above (SIFT-MP, SIFT-MR and SIFT-TH), we
match the query to all the references of the user
and use the median match score (normalized match
counts or match scores) as the final SIFT score for
the query.
Results given in Table 1 show the performance
of three methods on the GPDS-160 dataset, using
5 genuine signatures as reference. As seen in these
results, normalization based on reference set statis-
tics (SIFT-MR) is the better of the two normalization
methods, while the novel SIFT-TH approach out-
performs both.
Method     Rotation (θ) bins   Translation (x,y) bins   EER
SIFT-MP    8                   4                        29.12%
SIFT-MR    8                   4                        25.84%
SIFT-TH    8                   4                        24.09%

Table 1: SIFT results with different usages.
5. Classification
One can use both user-dependent and global classi-
fiers in offline signature verification. Because user-
dependent classifiers are trained to discriminate a
single person against others, they are reported to
be more successful [37], as long as there are enough
references for each subject to train the classifiers.
Combining user-dependent and global verifica-
tion systems has been investigated before. For
instance, Eskander et al. propose a hybrid offline
signature verification system consisting of writer-
independent and writer-dependent classifiers that
are used selectively, instead of concurrently [38].
In our system, classification is performed using
SVMs, where two different approaches are inves-
tigated, namely global and user-dependent SVMs.
The SVM classifier is found to be very successful in
signature verification literature [39, 13, 40, 41, 42,
20, 26], in addition to reports of their good gener-
alization abilities in general.
Both the user-dependent and global classifiers are
trained with RBF kernels and parameters are opti-
mized with grid search on a separate validation set
(users 161-300 from the GPDS-300 dataset, who
are not in the test set). The number of genuine
signatures used as reference is set to 5 or 12, in-
line with most previous research. For global SVMs
(GSVMs), half of the users in the validation set are
used for training and the other half for testing.
5.1. Global SVMs (GSVM)
A global (also called writer-independent or user-
independent) signature verification system learns to
differentiate between two types of classes: genuine
and forgery. The global classifier (GSVM) in this
work is a user-independent classifier that is trained
to separate difference vectors obtained from gen-
uine signatures of a user, from those obtained from
(skilled) forgery signatures of the same user.
To obtain the difference vectors, features ob-
tained from a query signature (genuine or forgery)
are compared to the features obtained from each
of the reference signatures of the claimed identity.
The resulting difference vectors are then normalized
so that each element of this vector represents how
many standard deviations away the query feature is
from the reference feature, using the standard de-
viation measured among the difference vectors of a
given query and references.
More precisely, let {R^1, R^2, ..., R^N} be the fea-
ture vectors extracted from the reference signatures
of a particular user and let Q = [q_1 ... q_M] be
the feature vector extracted from a test signature,
where N is the number of reference signatures and
M is the number of features. Then, we compute
N difference vectors for each query, where the i-th
difference vector is computed as

$$D^i = Q - R^i = \begin{bmatrix} (q_1 - R^i_1)/(\sigma_1 + \tau) \\ (q_2 - R^i_2)/(\sigma_2 + \tau) \\ \vdots \\ (q_M - R^i_M)/(\sigma_M + \tau) \end{bmatrix}, \tag{3}$$

where $\sigma_m$ is the standard deviation of the m-th di-
mension of the difference vectors between the query and
the claimed user's reference signatures, i = 1...N. It is
calculated as

$$\sigma_m = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left((q_m - R^i_m) - \mu_m\right)^2}, \tag{4}$$

where $\mu_m = \frac{1}{N}\sum_{i=1}^{N}(q_m - R^i_m)$ and τ is a small con-
stant to eliminate division by zero. With the help of
this normalization, the difference vector represents
how many standard deviations away the query fea-
ture is from the reference feature. It is possible to
evaluate a given query even with one reference of
the claimed user, which is especially advantageous
in real-life applications. Because we have N differ-
ence vectors Q − R^i, one obtained from each reference,
we have N classifier scores for each query. To get a
final classifier score, we calculate the average score.
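The difference-vector computation can be implemented compactly; note that the population standard deviation (the 1/N factor) matches Equation 4. This is a sketch under those assumptions, not the authors' code.

```python
import numpy as np

def difference_vectors(Q, R, tau=1e-6):
    """Normalized difference vectors D^i (Eq. 3): each element counts how
    many standard deviations the query feature lies from the reference
    feature, with sigma_m computed over the N query-reference
    differences (Eq. 4)."""
    diffs = Q[None, :] - R        # shape (N, M): rows are q_m - R^i_m
    sigma = diffs.std(axis=0)     # population std over i = 1..N (Eq. 4)
    return diffs / (sigma + tau)  # one normalized D^i per reference

rng = np.random.default_rng(1)
R = rng.normal(size=(5, 10))      # N = 5 references, M = 10 features
Q = rng.normal(size=10)           # query feature vector
D = difference_vectors(Q, R)
print(D.shape)                    # (5, 10): N difference vectors
```

Each of the N rows is then scored by the GSVM, and the N scores are averaged.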
Note here that the SVM is learning which dif-
ferences in the feature vector may be within the
normal variations of a user and which differences in-
dicate forgeries. For instance, using global features
such as size, pixel density, width-to-height ratio,
the SVM learns how much variation in a particu-
lar feature matters. In the case of local features,
the SVM can learn how to weight differences in the
center compared to the periphery of the signature,
for instance.
We devote users 161-300 from the GPDS-300
dataset to GSVM training, so that the users of the
training and test sets do not overlap. In fact, the
users devoted to training could be selected from a
completely different database if appropriate image
normalization is applied. In testing, the references of
test users are used only to calculate difference vec-
tors, while their remaining signatures are used as
queries.
5.2. User-dependent SVMs (USVM)
In the second approach, we train one user-
dependent SVM per user, with the expectation that
the user-dependent SVM can better learn to differ-
entiate genuine signatures of a person from forg-
eries. Each SVM is trained with the raw feature
vectors obtained from the reference signatures of
the corresponding user and those obtained by ran-
dom forgeries (other users’ reference signatures re-
served for training). Note that in this case, we do
not need a separate group of users for training as
opposed to GSVM, since we only use genuine sig-
natures of others.
5.3. Classifier Combination
In general, classifiers may differ by changing the
training set, input features and parameters of the
classifier. In many problems, score level combina-
tion of the classifiers using different representations
is reported to perform better than feature level
combination.
We combine the classifiers of the features intro-
duced in Section 4 for user-dependent and user-
independent (global) cases. Specifically, for a
single query signature, there are 7 score out-
puts obtained: HOG-Cartesian USVM, HOG-
Polar USVM, SIFT USVM, LBP-Cartesian USVM,
HOG-Cartesian GSVM, HOG-Polar GSVM, LBP-
Cartesian GSVM. A simple score level linear com-
bination is used to obtain the final score where the
weight set is found empirically from a validation set.
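The weighted sum-rule fusion with validation-set weight search can be sketched as follows. The weight grid and the 0.5 decision threshold are assumptions for illustration; the paper optimizes the equal error rate on the GPDS 161-300 validation users.

```python
import itertools
import numpy as np

def fuse(scores, weights):
    """Weighted sum-rule fusion: rows of `scores` are samples, columns
    are the 7 per-classifier score outputs."""
    w = np.asarray(weights, dtype=float)
    return scores @ (w / w.sum())

def grid_search_weights(val_scores, val_labels, grid=(0.0, 0.5, 1.0)):
    """Pick the weight set minimizing a simple validation error rate
    (a stand-in criterion; the paper uses EER)."""
    best_w, best_err = None, np.inf
    for w in itertools.product(grid, repeat=val_scores.shape[1]):
        if sum(w) == 0:
            continue
        pred = fuse(val_scores, w) > 0.5      # assumed decision threshold
        err = np.mean(pred != val_labels)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 200).astype(bool)
# synthetic validation scores: genuine samples score higher on average
scores = rng.normal(loc=labels[:, None] * 0.8, scale=0.5, size=(200, 7))
w, err = grid_search_weights(scores, labels)
print(len(w))   # 7 weights, one per classifier score
```

Refining the grid around the best coarse weights corresponds to the "precise weights" variant reported in Table 4.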
We have only used the well-known fusion method of
weighted averaging (with fixed and learned weight
sets). However, the important contribution here is
to show that we can obtain error rates that are 8-15
percentage points
lower compared to the best single feature classifier
(using 5 references), either with USVMs or GSVM.
6. Experimental Evaluation
6.1. Database and Methodology
The GPDS-300 dataset, a publicly available subset
of the GPDS-960 dataset [43], is used to train and
evaluate the system. We perform evaluations on the
GPDS-160 subset for testing, in order to be compa-
rable with most of the recent works, while the re-
maining 140 subjects are used for training (GSVMs).
In order to obtain results that are comparable to
those reported in the literature, we train classifiers
using 5 or 12 reference signatures. However, we
note that most real life applications involve the use
of 5 or fewer reference signatures.
During testing, we use all genuine signatures of
a user except those that are used as reference; thus
resulting in 12 and 19 genuine tests per user for the
cases of 12 and 5 reference signatures, respectively.
Since we do not use any skilled forgeries of test users
in training, all skilled forgeries of a user (30 of them)
are used in testing.
We do not use any random forgeries during test-
ing as they are generally much easier to detect
and using them together with skilled forgeries ba-
sically amounts to averaging the performance ob-
tained with skilled and random forgeries.
All results in Tables 2-6 are reported as EER re-
sults on skilled forgery tests, while we report DER
results in Table 8 in order to compare our work with
previous works that do not provide EER results.
6.2. Results
The USVM results are given in Table 2, where all
classifiers use a hierarchy of coarse-to-fine Cartesian
grids, except for the LBP-0 global method where
the features are obtained globally. Considering
these results, we see that LBP features are better
than HOG and SIFT-TH features. Furthermore,
among the farther-neighborhood variants, LBP-2Fmin
performs the best; while counter-intuitive at first, this
shows the discriminative power of rare LBP pat-
terns when exploited through information fusion.
We analyze the statistical significance of these
results in Section 6.5.
Features (Grid Hierarchy)   12 ref.   5 ref.
SIFT-TH                     20.51%    24.09%
HOG-Cartesian               19.54%    21.36%
HOG-Polar                   16.39%    18.26%
LBP-0 (global)              21.30%    24.18%
LBP-0                       15.32%    17.54%
LBP-1                       15.01%    16.94%
LBP-2                       15.10%    17.43%
LBP-2n32                    15.78%    18.16%
LBP-2min                    21.68%    22.43%
LBP-2F                      11.30%    12.77%
LBP-2Fn32                   11.16%    11.20%
LBP-2Fmin                    9.64%    11.01%
LBP-2F (fusion)              8.75%     9.13%

Table 2: USVM results (EER) with different features.

Features (Grid Hierarchy)            12 ref.   5 ref.
HOG-Polar                            24.00%    25.41%
HOG-Polar (aligned)                  22.35%    23.87%
HOG-Cartesian                        20.83%    26.13%
HOG-Cartesian (aligned)              20.55%    23.49%
LBP-2F (5x5, first group)            30.25%    30.32%
LBP-2F (5x5, first group, aligned)   26.96%    26.84%

Table 3: GSVM results (EER) with different features.

The GSVM results are given in Table 3. Con-
sidering these results, we see that GSVMs obtain
higher accuracies with HOG features compared to
LBP, in contrast to USVMs. However, the accuracy
results are significantly lower compared to the best
results obtained with USVMs. This is not very sur-
prising as the USVMs are specifically trained for
each user, while GSVMs only know about global
(across all users) variations in each dimension. On
the other hand, GSVM improves the overall per-
formance slightly when used in conjunction with
USVMs, as shown later.
The best grid method is not conclusive: the log-
polar grid is better with USVMs while the Carte-
sian grid is better with the GSVM.
Finally, classifier combination results are pro-
vided in Table 4. As found in many studies in
different fields, we also find that score-level com-
bination of classifiers (using a weighted sum-rule)
improves overall accuracy. Specifically, using the
best combination with weights that are learnt from
a separate validation set, we obtain very low equal
error rates: 6.97% and 7.98% EER using 12 and 5
references, respectively. The precise weights are
obtained by a finer-grained weight search, in which
the discrete intervals used for the weights are kept
smaller.
While GSVMs contribute only slightly to the USVM
combination, they can have a significant role in ap-
plications where each user has only a few refer-
ence signatures, or when re-training the system is
not possible.
The ROC curve for the best combined system
is shown in Figure 9 both for 5 references and 12
references. Log-scaled values for x-axis are used for
all ROC curves for easier analysis.
Method                                          12 ref.   5 ref.
All GSVMs, not aligned                          17.14%    20.60%
All GSVMs, aligned                              18.32%    20.88%
All USVMs (both HOGs, LBP-2F fusion, SIFT-TH)    7.84%     8.57%
All USVMs and all GSVMs                          7.32%     8.30%
All USVMs and all GSVMs with precise weights     6.97%     7.98%

Table 4: Equal error rates for classifier combination.
6.3. Effect of Varying Reference Sets
As a sensitivity analysis of the selection of refer-
ence signatures, we ran 5-fold cross validation tests,
with results given in Table 5. In each fold, we se-
lected a different subset of 12 genuine signatures as
reference and used the remaining ones as genuine
test samples. In all our other experiments, the first
N genuine signatures are chosen as the reference
set, where N is the number of references.
The mean EER values reported with varying ref-
erence sets are close to the results given with the
first 5 or 12 genuine signatures used as reference.
Furthermore, the relative performances of the three
methods remain unchanged. On the other hand,
there is a relatively high standard deviation, in-
dicating that not only the number, but also the
selection of reference signatures matters in overall
accuracy.
Method                        5-fold EER
HOG-Cartesian GSVM, aligned   21.38% ± 2.13%
HOG-Polar USVM                16.33% ± 2.11%
LBP-2 USVM                    15.40% ± 2.29%

Table 5: Effect of varying reference sets.
Figure 9: ROC curve for the best system given in Table 4.
6.4. Comparison with Feature Level Combination
Another common choice for information fusion
is feature level combination. We have compared
feature level fusion with score level fusion, for sev-
eral pairs of features and corresponding classifiers,
with results given in Table 6. The results are ob-
tained with 5 reference signatures on the GPDS-160
dataset and the weights for score level fusion are
learnt from a separate set (GPDS 161-300). ROC
curves are provided in Figure 10 for detailed anal-
ysis of this comparison.
We observe that score level fusion outperforms
feature level fusion. Furthermore, besides the per-
formance advantage, score level fusion is easier to
implement, requiring only a simple optimization of
the weights, and it can be parallelized.
Method EER
HOG-Cartesian (1) 21.36%
HOG-Polar (2) 18.26%
LBP-2F (3x3) (3) 17.43%
LBP-2F (5x5, first group) (4) 13.47%
LBP-2F (5x5, second group) (5) 13.76%
Feature fusion of (1) and (3) 17.43%
Score fusion of (1) and (3) 16.05%
Feature fusion of (2) and (3) 18.32%
Score fusion of (2) and (3) 14.42%
Feature fusion of (4) and (5) 13.61%
Score fusion of (4) and (5), equal
weights
13.54%
Table 6: Comparison of score level fusion and feature level
fusion results.
13
15. ACCEPTED MANUSCRIPT
A
C
C
E
P
T
E
D
M
A
N
U
S
C
R
I
P
T
6.5. Statistical Analysis
We report 95% confidence intervals for the main
methods considered in this work, in Table 7. To
obtain confidence intervals, we first use a Monte
Carlo simulation of the balanced repeated replicates
method of Micheals and Boult [44], as described in
[45]. This method samples the result of a single
query from each user in one trial and computes the
confidence interval of the equal error rate observed
in 1000 such trials, similar to [46].
In addition, we performed statistical significance
tests by considering the results of paired trials that
were used to obtain the confidence intervals. These
analyses indicate that LBP-2F (fusion) method is
significantly better compared to LBP-0 and LBP-1
methods; and the USVMs are significantly better
compared to GSVMs (p ≤ .05, two-tailed).
When we consider 90% confidence intervals, the
LBP-2Fmin results are also found to be significantly
better compared to LBP-0 and LBP-1 results (p ≤
.10, two-tailed).
Method Confidence
Intervals
LBP-0 15.63 ± 5.00%
LBP-1 15.00 ± 5.62%
LBP-2 15.31 ± 5.00%
LBP-2Fmin 9.69 ± 4.38%
LBP-2F (fusion) 9.07 ± 4.07%
All GSVMs (not aligned) 17.50 ± 5.63%
All GSVMs (aligned) 18.44 ± 5.94%
All USVMs 7.81 ± 4.06%
All USVMs and all GSVMs
(coarse weights)
7.35 ± 3.91%
All USVMs and all GSVMs
(precise weights)
6.88 ± 3.75%
Feature fusion (2) and (3) 17.50 ± 5.63%
Score fusion (2) and (3) 14.07 ± 5.31%
Table 7: Confidence intervals for some of the main methods.
6.6. Comparison with State-of-the-Art Results
For evaluation of our results, we give recent re-
sults on the GPDS database in Table 8. We note
that most of these results are not directly compara-
ble, as performance depend heavily on several fac-
tors: the used database (GPDS-100, GPDS-160, or
some random subset of the full database denoted
with subscript Rand); the number of reference sig-
natures used in training (more references normally
(a)
(b)
(c)
Figure 10: ROC curves for score and feature level combina-
tions: a) fusion of methods (1) and (3); b) fusion of (2) and
(3); c) fusion of (4) and (5), defined in Table 6.
14
16. ACCEPTED MANUSCRIPT
A
C
C
E
P
T
E
D
M
A
N
U
S
C
R
I
P
T
help with verification); the use of skilled forgeries in
training (some systems use none, while others may
use a varying number); and the image type (binary
or gray-level). Nonetheless, we give results from the
literature for context.
This comparison set includes systems that use
skilled forgeries in training [13, 14, 47], but we do
not use any skilled forgeries in training as the use of
skilled forgeries is not really suitable for real life ap-
plications. Also some systems use smaller subsets
of the GPDS database [48, 47, 49, 12, 42, 20], while
we use the whole GPDS-160 database. Finally,
some systems utilize the gray-level version of the
database and benefit from the richer information
present in the gray-level image [14, 47, 49, 12, 20],
but our work is done on the binary version of the
database.
In summary, our experimental evaluation proto-
col considers the binary GPDS-160 with 12 genuine
signatures as reference set and without skilled forg-
eries in training. The best previous result with this
configuration is reported as 16.81% DER [19]. Our
fusion results that are directly comparable to this
system are 6.97% and 7.98% EERs with 12 and 5
genuine references, respectively.
7. Summary and Conclusions
We present a state-of-the-art automatic offline
signature verification system based on HOG and
LBP features extracted from local grid zones. For
either of the representations, features obtained from
grid zones are concatenated in a coarse-to-fine hier-
archy to form the final feature vector. Two different
types of SVM classifiers are trained to perform ver-
ification, namely global and user-dependent SVMs.
We also evaluate the fusion of classifiers and show
that fusion improves the overall verification perfor-
mance and that score level fusion outperforms fea-
ture level fusion.
Using the definitions in [21], our system can be
defined as an adapted classification, global fusion
and global decision system. It is experimentally
shown that when enough training data (at least 5
genuine signatures and many random forgeries as
the reference set) is available, user-based classifiers
are much more successful, as previously observed in
literature, but user-independent classifiers comple-
ment them to improve performance.
Obtained results are in par or better compared
to those reported in the literature for the GPDS
database without using any skilled forgeries in
training.
8. Future Work
While state-of-the-art in offline signature veri-
fication achieves around 10-15% EER in various
databases, the performance of these systems would
be expected to be significantly worse with signa-
tures collected in real life scenarios. In the future,
systems research needs to concentrate on increasing
the robustness of systems towards larger variations
encountered in real life (e.g. signatures signed in
smaller spaces, or in a hurry, or on documents with
interfering lines).
Another issue is to allow the system work well
with less number of references, such as three as is
the case in many banking operations or even with
one reference. Importance of user-based score nor-
malization becomes significant with such extreme
cases. Developing a simpler and better score nor-
malization method is a part of our future work.
Measuring the complexity level of a signature can
also help with many issues such as user-based score
normalization or enforcing the strength of the sig-
nature.
15
17. ACCEPTED MANUSCRIPT
A
C
C
E
P
T
E
D
M
A
N
U
S
C
R
I
P
T
Reference Method GPDS Set Training Testing DER
Vargas et.
al. [49]
Wavelets GPDS-100 gray
level
5 gen. + rand. forg. 19 gen. and 24 skl.
forg.
14.22%
Vargas et.
al. [12]
Texture
features
GPDS-100 gray
level
10 gen. + rand. forg. 14 gen. and 24 skl.
forg.
9.02%
Hu and
Chen [20]
Writer
indep.,
Adaboost
GPDSrand − 100
gray level
10 gen. + rand. forg. 14 gen. + 30 skl.
forg.
9.94%
Hu and
Chen [20]
Writer
dependent,
SVM
GPDSrand − 150
gray level
10 gen. + rand. forg. 14 gen. + 30 skl.
forg.
7.66%
Parodi et.
al. [50]
Graphomt.
features
GPDSrand − 130 13 gen. + rand. forg. 11 gen., 24 skl.
forg., and simple
forg. (not de-
tailed)
4.21%
Bharathi
and Shekar
[42]
Chain code
histogram
GPDS-100 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
11.4%
Nguyen et.
al. [40]
SVM GPDS-160 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
20.07%
Nguyen et.
al. [51]
Global fea-
tures
GPDS-160 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
17.25%
Batista et.
al. [19]
EoCs GPDS-160 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
16.81%
Eskander
et. al. [38]
Writer
indep.
classifier
GPDS-160 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
26.73%
Eskander
et. al. [38]
Writer
dependent
classifier
GPDS-160 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
22.71%
Proposed SVM GPDS-160 5 gen. + rand. forg. 19 gen. + 30 skl.
forg.
7.98%
Proposed SVM GPDS-160 12 gen. + rand. forg. 12 gen. + 30 skl.
forg.
6.97%
Table 8: Summary of recent results on GPDS dataset.
16
18. ACCEPTED MANUSCRIPT
A
C
C
E
P
T
E
D
M
A
N
U
S
C
R
I
P
T
References
[1] R. Plamondon, G. Lorette, Automatic signature verifi-
cation and writer identification – the state of the art,
Pattern Recognition 22 (2) (1989) 107–131.
[2] R. Sabourin, R. Plamondon, G. Lorette, Off-line iden-
tification with handwritten signature images: survey
and perspectives, Structured Document Image Analy-
sis (1992) 219–234.
[3] F. Leclerc, R. Plamondon, Automatic signature verifi-
cation: the state of the art, 1989–1993, International
Journal of Pattern Recognition and Artificial Intelli-
gence 8 (3) (1994) 643–660.
[4] R. Plamondon, S. N. Srihari, On-line and off-line hand-
writing recognition: a comprehensive survey, IEEE
Trans. on Pattern Analysis and Machine Intelligence
22 (1) (2000) 63–84.
[5] J. Coetzer, Off-line signature verification, Ph.D. thesis,
University of Stellenbosch, South Africa (2005).
[6] D. Impedovo, G. Pirlo, Automatic signature verifica-
tion: The state of the art, Trans. Sys. Man Cyber Part
C 38 (5) (2008) 609–635.
[7] M. S. Arya, V. S. Inamdar, A preliminary study on
various off-line hand written signature verification ap-
proaches, International Journal of Computer Applica-
tions 1 (9) (2010) 55–60, published By Foundation of
Computer Science.
[8] S. Pal, M. Blumenstein, U. Pal, Off-line signature veri-
fication systems: a survey, in: Proceedings of the Inter-
national Conference & Workshop on Emerging Trends
in Technology, ICWET ’11, ACM, New York, NY, USA,
2011, pp. 652–657.
[9] V. Bhosale, A. Karwankar, Automatic static signature
verification systems: A review, International Journal Of
Computational Engineering Research 3 (2) (2013) 8–12.
[10] D. Impedovo, G. Pirlo, M. Russo, Recent advances in
offline signature identification, in: Frontiers in Hand-
writing Recognition (ICFHR), 2014 14th International
Conference on, IEEE, 2014, pp. 639–642.
[11] J. Ruiz-Del-Solar, C. Devia, P. Loncomilla, F. Concha,
Offline signature verification using local interest points
and descriptors, in: CIARP ’08: Proceedings of the
13th Iberoamerican congress on Pattern Recognition,
Springer-Verlag, Berlin, Heidelberg, 2008, pp. 22–29.
[12] J. F. Vargas, M. A. Ferrer, C. M. Travieso, J. B. Alonso,
Off-line signature verification based on grey level in-
formation using texture features, Pattern Recogn. 44
(2011) 375–385.
[13] M. A. Ferrer, J. B. Alonso, C. M. Travieso, Offline geo-
metric parameters for automatic signature verification
using fixed-point arithmetic, IEEE Trans. Pattern Anal.
Mach. Intell. 27 (6) (2005) 993–997.
[14] F. J. Vargas, M. A. Ferrer, C. M. Travieso, J. B. Alonso,
Off-line signature verification based on high pressure
polar distribution, in: Proceedings of the 11th Interna-
tional Conference on Frontiers in Handwriting Recog-
nition, ICFHR 2008, 2008, pp. 373–378.
[15] J. Fierrez-Aguilar, N. Alonso-Hermira, G. Moreno-
Marquez, J. Ortega-Garcia, An off-line signature ver-
ification system based on fusion of local and global in-
formation, in: D. Maltoni, A. Jain (Eds.), Biometric
Authentication, Vol. 3087 of Lecture Notes in Computer
Science, Springer Berlin Heidelberg, 2004, pp. 295–306.
[16] L. S. Oliveira, E. Justino, R. Sabourin, F. Bortolozzi,
Combining classifiers in the ROC-space for off-line sig-
nature verification, Journal of Universal Computer Sci-
ence 14 (2) (2008) 237–251.
[17] D. Bertolini, L. S. Oliveira, E. Justino, R. Sabourin,
Reducing forgeries in writer-independent off-line signa-
ture verification through ensemble of classifiers, Pattern
Recognition 43 (2010) 387–396.
[18] H. N. Prakash, D. S. Guru, Offline signature verifica-
tion: An approach based on score level fusion, Interna-
tional Journal of Computer Applications 1 (18) (2010)
52–58.
[19] L. Batista, E. Granger, R. Sabourin, Dynamic selec-
tion of generative-discriminative ensembles for off-line
signature verification, Pattern Recogn. 45 (4) (2012)
1326–1340.
[20] J. Hu, Y. Chen, Offline signature verification using real
adaboost classifier combination of pseudo-dynamic fea-
tures, in: Document Analysis and Recognition (IC-
DAR), 2013 12th International Conference on, 2013, pp.
1345–1349.
[21] J. Fierrez-Aguilar, D. Garcia-Romero, J. Ortega-
Garcia, J. Gonzalez-Rodriguez, Adapted user-
dependent multimodal biometric authentication
exploiting general information, Pattern Recogn. Lett.
26 (2005) 2628–2639.
[22] E. Shechtman, M. Irani, Matching local self-similarities
across images and videos, in: Computer Vision and Pat-
tern Recognition, 2007. CVPR’07. IEEE Conference on,
IEEE, 2007, pp. 1–8.
[23] N. Dalal, B. Triggs, Histograms of oriented gradients
for human detection, in: Proceedings of the 2005 IEEE
Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR’05) - Volume 1 - Volume
01, CVPR ’05, IEEE Computer Society, Washington,
DC, USA, 2005, pp. 886–893.
[24] B. Zhang, Off-line signature verification and identifica-
tion by pyramid histogram of oriented gradients, Inter-
national Journal of Intelligent Computing and Cyber-
netics 3 (2010) 611–630.
[25] T. Ojala, M. Pietikainen, D. Harwood, Performance
evaluation of texture measures with classification based
on Kullback discrimination of distributions, 1994, pp.
A:582–585.
[26] R. Wajid, A. Bin Mansoor, Classifier performance eval-
uation for offline signature verification using local bi-
nary patterns, in: Visual Information Processing (EU-
VIP), 2013 4th European Workshop on, 2013, pp. 250–
254.
[27] A. Porebski, N. Vandenbroucke, D. Hamad, LBP his-
togram selection for supervised color texture classifi-
cation, in: Image Processing (ICIP), 2013 20th IEEE
International Conference on, 2013, pp. 3239–3243.
[28] B. Sujatha, V. V. Kumar, P. Harini, A new logical com-
pact LBP co-occurrence matrix for texture analysis, In-
ternational Journal of Scientific & Engineering Research
3 (2) (2012) 1–5.
[29] T. Mäenpää, M. Pietikäinen, Multi-scale binary pat-
terns for texture analysis, in: J. Bigun, T. Gustavs-
son (Eds.), Image Analysis, Vol. 2749 of Lecture Notes
in Computer Science, Springer Berlin Heidelberg, 2003,
pp. 885–892.
[30] X. Qi, Y. Qiao, C.-G. Li, J. Guo, Multi-scale joint en-
coding of local binary patterns for texture and mate-
rial classification, in: British Machine Vision Confer-
ence (BMVC), 2013, pp. 1–11.
[31] L. Zhang, R. Chu, S. Xiang, S. Liao, S. Li, Face de-
17
19. ACCEPTED MANUSCRIPT
A
C
C
E
P
T
E
D
M
A
N
U
S
C
R
I
P
T
tection based on multi-block LBP representation, in:
S.-W. Lee, S. Li (Eds.), Advances in Biometrics, Vol.
4642 of Lecture Notes in Computer Science, Springer
Berlin Heidelberg, 2007, pp. 11–18.
[32] P. Viola, M. Jones, Rapid object detection using a
boosted cascade of simple features, in: Computer Vision
and Pattern Recognition, 2001. CVPR 2001. Proceed-
ings of the 2001 IEEE Computer Society Conference on,
Vol. 1, 2001, pp. I–511–I–518.
[33] T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution
gray-scale and rotation invariant texture classification
with local binary patterns, IEEE Trans. Pattern Anal.
Mach. Intell. 24 (7) (2002) 971–987.
[34] K. Barkoula, E. N. Zois, E. Zervas, G. Economou, Off-
line signature verification based on ordered grid fea-
tures: An evaluation., in: M. I. Malik, M. Liwicki,
L. Alewijnse, M. Blumenstein, C. Berger, R. Stoel,
B. Found (Eds.), AFHA, Vol. 1022 of CEUR Workshop
Proceedings, CEUR-WS.org, 2013, pp. 36–40.
[35] D. G. Lowe, Distinctive image features from scale-
invariant keypoints, Int. J. Comput. Vision 60 (2)
(2004) 91–110.
[36] K. E. Mwangi, Offline handwritten signature verifica-
tion using SIFT features, Master’s thesis, Makerere
University, Uganda (2008).
[37] M. B. Yılmaz, B. Yanıkoğlu, Ç. Tırkaz, A. A. Kholma-
tov, Offline signature verification using classifier combi-
nation of HOG and LBP features, in: Proc. of the 2011
Intl. Joint Conf. on Biometrics (IJCB 2011), IJCB ’11,
Washington, DC, USA, 2011, pp. 1–7.
[38] G. S. Eskander, R. Sabourin, E. Granger, Hybrid
writer-independent writer-dependent offline signature
verification system, IET Biometrics 2 (2013) 169–
181(12).
[39] E. J. R. Justino, F. Bortolozzi, R. Sabourin, A com-
parison of SVM and HMM classifiers in the off-line sig-
nature verification, Pattern Recogn. Lett. 26 (9) (2005)
1377–1385.
[40] V. Nguyen, M. Blumenstein, V. Muthukkumarasamy,
G. Leedham, Off-line signature verification using en-
hanced modified direction features in conjunction with
neural classifiers and support vector machines, in: IC-
DAR ’07: Proceedings of the Ninth International Con-
ference on Document Analysis and Recognition, IEEE
Computer Society, Washington, DC, USA, 2007, pp.
734–738.
[41] Y. Guerbai, Y. Chibani, N. Abbas, One-class versus bi-
class SVM classifier for off-line signature verification, in:
Multimedia Computing and Systems (ICMCS), 2012
International Conference on, 2012, pp. 206–210.
[42] R. Bharathi, B. Shekar, Off-line signature verification
based on chain code histogram and support vector ma-
chine, in: Advances in Computing, Communications
and Informatics (ICACCI), 2013 International Confer-
ence on, 2013, pp. 2063–2068.
[43] F. Vargas, M. A. Ferrer, C. M. Travieso, J. B.
Alonso, Off-line handwritten signature GPDS-960 cor-
pus, in: IAPR 9’th International Conference on Docu-
ment Analysis and Recognition, 2007, pp. 764–768.
[44] R. Micheals, T. Boult, Efficient evaluation of classifica-
tion and recognition systems, in: Computer Vision and
Pattern Recognition, 2001. CVPR 2001. Proceedings of
the 2001 IEEE Computer Society Conference on, Vol. 1,
2001, pp. I–50–I–57 vol.1.
[45] M. E. Schuckers, A. Hawley, K. Livingstone,
N. Mramba, A comparison of statistical methods for
evaluating matching performance of a biometric iden-
tification device: a preliminary report, in: A. K. Jain,
N. K. Ratha (Eds.), Biometric Technology for Human
Identification, Vol. 5404 of Society of Photo-Optical
Instrumentation Engineers (SPIE) Conference Series,
2004, pp. 144–155.
[46] C. Shen, Z. Cai, X. Guan, Y. Du, R. Maxion, User
authentication through mouse dynamics, Information
Forensics and Security, IEEE Transactions on 8 (1)
(2013) 16–30.
[47] J. F. V. Bonilla, M. A. F. Ballester, C. M. T. Gonza-
lez, J. B. A. Hernandez, Offline signature verification
based on pseudo-cepstral coefficients, in: Proceedings
of the 2009 10th International Conference on Document
Analysis and Recognition, ICDAR ’09, IEEE Computer
Society, Washington, DC, USA, 2009, pp. 126–130.
[48] R. Larkins, M. Mayo, Adaptive feature thresholding for
off-line signature verification, in: IVCNZ ’08: Proceed-
ings of the 23rd International Conference In Image and
Vision Computing New Zealand, 2008, pp. 1–6.
[49] J. Vargas, C. Travieso, J. Alonso, M. A. Ferrer, Off-
line signature verification based on gray level informa-
tion using wavelet transform and texture features, in:
Frontiers in Handwriting Recognition (ICFHR), 2010
International Conference on, 2010, pp. 587–592.
[50] M. Parodi, J. C. Gomez, A. Belaid, A circular grid-
based rotation invariant feature extraction approach for
off-line signature verification., in: Document Analysis
and Recognition (ICDAR), 2011 International Confer-
ence on, IEEE, 2011, pp. 1289–1293.
[51] V. Nguyen, M. Blumenstein, G. Leedham, Global fea-
tures for the off-line signature verification problem, in:
Proceedings of the 2009 10th International Conference
on Document Analysis and Recognition, ICDAR ’09,
IEEE Computer Society, Washington, DC, USA, 2009,
pp. 1300–1304.
18