Off-line signature verification based on grey level information using texture features

J.F. Vargas b,*, M.A. Ferrer a, C.M. Travieso a, J.B. Alonso a

a Instituto para el desarrollo tecnológico y la Innovación en Comunicaciones (IDeTIC), Universidad de Las Palmas de Gran Canaria, Tafira Campus 35017, Las Palmas, Spain
b Electronic Engineering Department (GEPAR), Universidad de Antioquia, Medellín, Colombia
Article info
Article history:
Received 7 April 2010
Received in revised form
11 June 2010
Accepted 29 July 2010
Keywords:
Off-line handwritten signature verification
Pattern recognition
Grey level information
Texture features
Co-occurrence matrix
Local binary pattern
LS-SVM
Abstract
A method for conducting off-line handwritten signature verification is described. It works at the global
image level and measures the grey level variations in the image using statistical texture features. The
co-occurrence matrix and local binary pattern are analysed and used as features. This method begins
with a proposed background removal. A histogram is also processed to reduce the influence of different
writing ink pens used by signers. Genuine samples and random forgeries have been used to train an
SVM model, and random and skilled forgeries have been used to test it. Results are comparable
to the state of the art for approaches that use the same two databases: the MCYT-75 and GPDS-100
corpuses. Combining the proposed features with features based on geometric information proposed
by other authors also promises performance improvements.
© 2010 Elsevier Ltd. All rights reserved.
1. Introduction
The security requirements of today’s society have placed
biometrics at the centre of an ongoing debate concerning its key
role in a multitude of applications [1–3]. Biometrics measure
individuals’ unique physical or behavioural characteristics with the
aim of recognising or authenticating identity. Common physical
biometrics include fingerprints, hand or palm geometry, retina, iris,
or facial characteristics. Behavioural characteristics include signature, voice (which also has a
physical component), keystroke pattern, and gait. Signature and voice technologies are the most
developed examples of this class of biometrics [4].
The handwritten signature is recognised as one of the most
widely accepted personal attributes for identity verification. This
signature is a symbol of consent and authorisation, especially in
the credit card and bank checks environment, and has been an
attractive target for fraud for a long time. There is a growing
demand for the processing of individual identification to be faster
and more accurate, and the design of an automatic signature
verification system is a real challenge. Plamondon and Srihari [5]
noted that automatic signature verification systems occupy a very
specific niche among other automatic identification systems: ‘‘On
the one hand, they differ from systems based on the possession of
something (key, card, etc.) or the knowledge of something
(passwords, personal information, etc.), because they rely on a
specific, well learned gesture. On the other hand, they also differ
from systems based on the biometric properties of an individual
(fingerprints, voice prints, retinal prints, etc.), because the
signature is still the most socially and legally accepted means of
personal identification.’’
A comparison of signature verification with other recognition
technologies (fingerprint, face, voice, retina, and iris scanning)
reveals that signature verification has several advantages as an
identity verification mechanism. Firstly, signature analysis can
only be applied when the person is/was conscious and willing to
write in the usual manner, although it is possible that individuals
may be forced to submit the handwriting sample. To give a
counter example, a fingerprint may also be used when the person
is in an unconscious (e.g. drugged) state. Forging a signature is
deemed to be more difficult than forging a fingerprint, given the
availability of sophisticated analyses [6]. Unfortunately, signature
verification is a difficult discrimination problem since a handwritten
signature is the result of a complex process depending on
the physical and psychological conditions of the signer, as well as
the conditions of the signing process [7]. The net result is that a
signature is a strongly variable entity and its verification, even for
human experts, is not a trivial matter. The scientific challenges
and the valuable applications of signature verification have
attracted many researchers from universities and the private
sector to signature verification. Undoubtedly, automatic signature
verification plays an important role in the set of biometric
techniques for personal verification [8,9].

Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/pr. Pattern Recognition 44 (2011) 375–385. 0031-3203/$ - see front matter © 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.patcog.2010.07.028

* Corresponding author. Tel.: +34 928 451269; fax: +34 928 451243. E-mail addresses: jfvargas@udea.edu.co (J.F. Vargas), mferrer@dsc.ulpgc.es (M.A. Ferrer), ctravieso@dsc.ulpgc.es (C.M. Travieso), jalonso@dsc.ulpgc.es (J.B. Alonso).
In the present study, we focus on features based on grey level
information from images containing handwritten signatures,
especially those providing information about ink distribution
along traces delineating the signature. Textural analysis
methodologies are included for this purpose since they provide
rotation and luminance invariance.
The paper is organised as follows: Section 2 presents the
background to off-line signature verification. Section 3 provides
an overview of statistical texture analysis. Section 4 describes the
approach proposed. Section 5 presents details about the database.
Section 6 is devoted to the classifiers. Section 7 presents the
evaluation protocol and reports the experimental results. The
paper ends with concluding remarks.
2. Background
There are two major methods of signature verification. One
is an on-line method to measure sequential data, such as
handwriting speed and pen pressure, with a special device. The
other is an off-line method that uses an optical scanner to obtain
handwriting data written on paper. There are two main
approaches for off-line signature verification: the static approach
and pseudo-dynamic approach. The static one involves geometric
measures of the signature while the pseudo-dynamic one tries
to estimate dynamic information from the static image [10].
On-line systems use special input devices such as tablets, while
off-line approaches are much more difficult because the only
available information is a static two-dimensional image obtained
by scanning pre-written signatures on a paper; the dynamic
information of the pen-tip (stylus) movement such as pen-tip
coordinates, pressure, velocity, acceleration, and pen-up and pen-
down can be captured by a tablet in real time but not by an image
scanner. The off-line method, therefore, needs to apply complex
image processing techniques to segment and analyse signature
shape for feature extraction [11]. Hence, on-line signature
verification is potentially more successful. Nevertheless, off-line
systems have a significant advantage in that they do not require
access to special processing devices when the signatures are
produced. In fact, provided sufficient verification accuracy can be
achieved, the off-line method has many more practical application
areas than the on-line one. Consequently, a growing amount of
research has studied feature-extraction methodologies
for off-line signature recognition and verification [12].
It is also true that the track of the pen shows a great deal of
variability. No two genuine signatures are ever exactly the same.
Actually, two identical signatures would constitute legal evidence
of forgery by tracing. The normal variability of signatures
constitutes the greatest obstacle to be met in achieving automatic
verification. Signatures vary in their complexity, duration, and
vulnerability to forgery. Signers vary in their coordination and
consistency. Thus, the security of the system varies from user to
user. A short, common name is no doubt easier to forge than a
long, carefully written name, no matter what technique is
employed. Therefore, a system must be capable of ‘‘degrading’’
gracefully when supplied with inconsistent signatures, and the
security risks must be kept to acceptable levels [13].
Problems of signature verification are addressed by taking into
account three different types of forgeries: random forgeries,
produced without knowing either the name of the signer or the
shape of the signature; simple forgeries, produced knowing the
name of the signer but without having an example of his
signature; and skilled forgeries, produced by people who, after
studying an original instance of the signature, attempt to imitate
it as closely as possible. Clearly, the problem of signature
verification becomes more and more difficult when passing from
random to simple and skilled forgeries, the latter being so difficult
a task that even human beings make errors in several cases.
Indeed, exercises in imitating a signature often allow us to
produce forgeries so similar to the originals that discrimination is
practically impossible; in many cases, the distinction is complicated
even more by the large variability introduced by some
signers when writing their own signatures [14]. For instance,
studies on signature shape found that North American signatures
are typically more stylistic in contrast to the highly personalised
and ‘‘variable in shape’’ European ones [15].
2.1. Off-line signature verification based on pseudo-dynamic
features
Dynamic information cannot be derived directly from static
signature images. Instead, some features can be derived that
partly represent dynamic information. These special characteristics
are referred to as pseudo-dynamic information. The term
"pseudo-dynamic" is used to distinguish real dynamic data,
recorded during the writing process, from information that
can be reconstructed from the static image [15].
There are different approaches to the reconstruction of
dynamic information from static handwriting records. Techniques
from the field of forensic document examination are mainly based
on the microscopic inspection of the writing trace and assumptions
about the underlying writing process [16]. Another paper
from the same author [17] describes their studies on the influence
of physical and bio-mechanical processes on the ink trace and
aims at providing a solid foundation for enhanced signature
analysis procedures. Simulated human handwriting movements
are considered by means of a writing robot to study the
relationship between writing process characteristics and ink
deposit on paper. Approaches from the field of image processing
and pattern recognition can be divided into: methods for
estimating the temporal order of stroke production [18,19];
methods inspired by motor control theory, which recover
temporal features on the basis of stroke geometries such as
curvature [20]; and finally, methods analysing stroke thickness
and/or stroke intensity variations [21–25]. An analysis mainly of
grey level distribution, in accordance with the methods of the last
group, is reported in this paper. A grey level image of a scanned
handwritten signature indicates that some pixels may represent
shapes written with high pressure, which appear as darker zones.
High pressure points (HPPs) can be defined as those signature
pixels which have grey level values greater than a suitable
threshold. The study of high pressure features was proposed by
Ammar et al. [21] to indicate regions where more physical effort
was made by the signer. This idea of calculating a threshold to
find the HPP was adopted and developed by other researchers
[26,14]. Lv et al. [27] set two thresholds to store only the
foreground points and edge points. They analyse only the
remaining points whose grey level value is between the two
thresholds and divide them into 12 segments. The percentage of
the points whose grey level value falls in the corresponding
segment is one of the values of the feature vector that reflects the
grey level distribution. Lv and co-workers also consider stroke
width distribution. In order to analyse not only HPPs but also low
pressure points (LPP) a complementary threshold has been
proposed by Mitra et al. [28]. In a previous work, we used a radial
and angular partition (RAP) for a local analysis to determine the
ratio, over each cell, between HPPs and all points comprising the
binary version of the image [29]. Franke [30] evaluates ink-trace
characteristics that are affected by the interaction of bio-
mechanical writing and physical ink-deposition processes. The
analysis focused on the ink intensity, which is captured along
the entire writing trace of a signature. The adaptive segmentation
of ink-intensity distributions takes the influences of different
writing instruments into account and supports the cross-validation
of different pen probes. In this way, texture analysis of the ink
trace appears to be an interesting approach to characterising personal
writing for enhanced handwritten signature verification procedures.
3. Statistical texture analysis
Statistical texture analysis requires the computation of texture
features from the statistical distribution of observed combinations
of intensities at specified positions relative to each other
in an image. The number of intensity points (pixels) in each
combination determines whether the texture statistics are
classified as first-order, second-order, or higher-order.
Biometric systems based on signature verification, in conjunction
with textural analysis, can reveal information about ink-pixel
distribution, which reflects personal characteristics of the signer,
e.g. pen-holding, writing speed, and pressure.
However, ink distribution information alone is not sufficient for
signer identification. Therefore, in the specific case of signature
strokes, we have also taken into account, for the textural analysis,
the pixels on the stroke contour, i.e. the stroke pixels that lie on
the signature-background border. These pixels contribute statistical
information about the signature shape, so the resulting distribution
data may be considered a combination of textural and shape
information.
3.1. Statistical features of first order
Statistical features of first order, as represented in a histogram,
take into account the individual grey level value of each pixel in
an image $I(x,y)$, $1 \le x \le N$, $1 \le y \le M$, but the spatial
arrangement is not considered, i.e. different spatial distributions
can have the same grey level histogram. A classical way of
parameterising the histogram is to measure its average and
standard deviation.
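For illustration, these two first-order parameters can be computed over the signature pixels only. This is a minimal Python sketch; the function name and the use of grey level 255 as the background marker are our assumptions, anticipating the background removal described in Section 4.

```python
import numpy as np

def first_order_features(image, background=255):
    """Mean and standard deviation of the grey-level histogram,
    computed over signature pixels only (background excluded).
    The choice of 255 as the background marker is an assumption."""
    pixels = image[image != background].astype(float)
    return pixels.mean(), pixels.std()
```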
Obviously, the discriminative ability of first order statistics is
really low for automatic signature verification, especially when
user and forger use a similar writing instrument. In fact, most
researchers normalise the histogram, so as to reduce the noise for
the subsequent processing of the signature.
3.2. Grey level co-occurrence matrices
The grey level co-occurrence matrix (GLCM) method is a way
of extracting second order statistical texture features from the
image [31]. This approach has been used in a number of
applications, e.g. [32–34], including ink type analysis [16].
A GLCM of an image $I(x,y)$ is a matrix $P(i,j\,|\,\Delta x,\Delta y)$,
$0 \le i \le G-1$, $0 \le j \le G-1$, where the number of rows and
columns is equal to the number of grey levels $G$. The matrix element
$P(i,j\,|\,\Delta x,\Delta y)$ is the relative frequency with which two
pixels with grey levels $i$ and $j$ occur separated by a pixel distance
$(\Delta x,\Delta y)$. For simplicity, in the rest of the paper, we will
denote the GLCM matrix as $P(i,j)$.
For a statistically reliable estimation of the relative frequency
we need a sufficiently large number of occurrences for each event.
The reliability of P(i, j) depends on the grey level number G and
the I(x, y) image size. In the case of images containing signatures,
instead of image size, this depends on the number of pixels in the
signature strokes. If the statistical reliability is not sufficient, we
need to reduce G to guarantee a minimum number of pixel
transitions per P(i, j) matrix component, despite losing texture
description accuracy. The grey level number G can be reduced
easily by quantising the image I(x, y).
The classical feature measures extracted from the GLCM
matrix (see Haralick [32] and Conners and Harlow [31]) are the
following:
Texture homogeneity H:
$$H = \sum_{i=0}^{G-1}\sum_{j=0}^{G-1} \{P(i,j)\}^2 \qquad (1)$$
A homogeneous scene will contain only a few grey levels,
giving a GLCM with only a few but relatively high values of P(i, j).
Thus, the sum of squares will be high.
Texture contrast C:
$$C = \sum_{n=0}^{G-1} n^2 \left\{ \sum_{i=0}^{G-1}\sum_{j=0}^{G-1} P(i,j) \right\}, \quad |i-j| = n \qquad (2)$$
This measure of local intensity variation will favour contributions
from $P(i,j)$ away from the diagonal, i.e. $i \neq j$.
Texture entropy E:
$$E = -\sum_{i=0}^{G-1}\sum_{j=0}^{G-1} P(i,j)\,\log\{P(i,j)\} \qquad (3)$$
Non-homogeneous scenes have high entropy, while a
homogeneous scene reveals low entropy.
Texture correlation O:
$$O = \frac{\sum_{i=0}^{G-1}\sum_{j=0}^{G-1} i\,j\,P(i,j) - \mu_i\mu_j}{\sigma_i\sigma_j} \qquad (4)$$
where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $P(i,j)$
rows, and $\mu_j$ and $\sigma_j$ the mean and standard deviation of the $P(i,j)$
columns, respectively.
Correlation is a measure of grey level linear dependence
between pixels at the specified positions relative to each other.
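The four measures above can be sketched in Python as follows. This is our own minimal, unoptimised implementation: the contrast of Eq. (2) is computed in the equivalent direct form $\sum_{i,j}(i-j)^2 P(i,j)$, and the function names are ours.

```python
import numpy as np

def glcm(image, G, dx, dy):
    """GLCM P(i, j | dx, dy) for an image already quantised to G grey
    levels, normalised to relative frequencies."""
    P = np.zeros((G, G))
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < cols and 0 <= y2 < rows:
                P[image[y, x], image[y2, x2]] += 1
    return P / P.sum()

def haralick_features(P):
    """Homogeneity (1), contrast (2), entropy (3), correlation (4)."""
    G = P.shape[0]
    i, j = np.indices((G, G))
    H = np.sum(P ** 2)
    C = np.sum((i - j) ** 2 * P)          # direct form of Eq. (2)
    nz = P > 0                            # avoid log(0)
    E = -np.sum(P[nz] * np.log(P[nz]))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    s_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    s_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    O = (np.sum(i * j * P) - mu_i * mu_j) / (s_i * s_j)
    return H, C, E, O
```

As a sanity check, a binary checkerboard with a horizontal offset yields P(0,1) = P(1,0) = 0.5, so H = 0.5, C = 1, and O = −1 (perfect anticorrelation).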
3.3. Local binary patterns
The local binary pattern (LBP) operator is defined as a grey
level invariant texture measure, derived from a general definition
of texture in a local neighbourhood, the centre of which is the
pixel (x, y). Recent extensions of the LBP operator have shown it to
be a really powerful measure of image texture, producing
excellent results in many empirical studies. LBP has been applied
in biometrics to the specific problem of face recognition [35,36].
The LBP operator can be seen as a unifying approach to the
traditionally divergent statistical and structural models of texture
analysis. Perhaps the most important property of the LBP operator
in real-world applications is its invariance to monotonic grey level
changes. Equally important is its computational simplicity, which
makes it possible to analyse images in challenging real-time
settings [37].
The local binary pattern operator describes the surroundings
of the pixel (x, y) by generating a bit-code from the binary
derivatives of a pixel as a complementary measure for local image
contrast. The original LBP operator takes the eight neighbouring
pixels using the centre grey level value I(x, y) as a threshold. The
operator generates a binary code 1 if the neighbour is greater than
or equal to the central level, otherwise it generates a binary code
0. The eight neighbouring binary codes can be represented by an
8-bit number. The LBP operator outputs for all the pixels in the
image can be accumulated to form a histogram, which represents
a measure of the image texture. Fig. 1 shows an example of a LBP
operator.
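The original 8-neighbour operator described above can be sketched for a single pixel as follows. Note that the bit ordering of the neighbours is only a convention; the clockwise-from-top-left ordering below is our assumption and need not match the one used in Fig. 1.

```python
import numpy as np

def lbp_code(image, x, y):
    """Original 8-neighbour LBP code of pixel (x, y): each neighbour
    contributes one bit, set if its grey level is >= the centre value.
    The bit ordering (clockwise from top-left) is an assumption."""
    gc = image[y, x]
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if image[y + dy, x + dx] >= gc:
            code |= 1 << bit
    return code
```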
The above LBP operator is extended in [38] to a generalised
grey level and rotation invariant operator. The generalised LBP
operator is derived on the basis of a circularly symmetric
neighbour set of P members on a circle of radius R. The parameter
P controls the quantisation of the angular space and R determines
the spatial resolution of the operator. The LBP code of central pixel
(x, y) with P neighbours and radius R is defined as
$$\mathrm{LBP}_{P,R}(x,y) = \sum_{p=0}^{P-1} s(g_p - g_c)\,2^p \qquad (5)$$
where
$$s(l) = \begin{cases} 1 & l \ge 0 \\ 0 & l < 0 \end{cases}$$
is the unit step function, $g_c = I(x,y)$ the grey level value of the
central pixel, and $g_p$ the grey level of the $p$th neighbour, defined as
$$g_p = I\!\left(x + R\sin\frac{2\pi p}{P},\; y - R\cos\frac{2\pi p}{P}\right) \qquad (6)$$
If the pth neighbour does not fall exactly in the pixel position,
its grey level is estimated by interpolation. An example can be
seen in Fig. 2.
In a further step, [38] defines an $\mathrm{LBP}_{P,R}$ operator invariant to
rotation as follows:
$$\mathrm{LBP}^{riu2}_{P,R}(x,y) = \begin{cases} \displaystyle\sum_{p=0}^{P-1} s(g_p - g_c) & \text{if } U(x,y) \le 2 \\ P+1 & \text{otherwise} \end{cases} \qquad (7)$$
where
$$U(x,y) = \sum_{p=1}^{P} \left| s(g_p - g_c) - s(g_{p-1} - g_c) \right|, \quad \text{with } g_P = g_0 \qquad (8)$$
Analysing the above equations, $U(x,y)$ can be calculated as
follows:
(1) work out the function $f(p) = s(g_p - g_c)$, $0 \le p \le P$, considering $g_P = g_0$;
(2) obtain its derivative: $f(p) - f(p-1)$, $1 \le p \le P$;
(3) calculate the absolute value: $|f(p) - f(p-1)|$, $1 \le p \le P$; and
(4) obtain $U(x,y)$ as the integration or sum $\sum_{p=1}^{P} |f(p) - f(p-1)|$.
If the grey levels of the pixel $(x,y)$ neighbours are uniform or
smooth, as in the case of Fig. 3, left, $f(p)$ will be a sequence of "0"
or "1" with at most two transitions. In this case $U(x,y)$ will be zero or
two and the $\mathrm{LBP}^{riu2}_{P,R}$ code is worked out as the sum
$\sum_{p=0}^{P-1} f(p)$.
Conversely, if the surrounding grey levels of pixel $(x,y)$ vary
quickly, as in the case of Fig. 3, right, $f(p)$ will be a sequence
containing several "0"–"1" or "1"–"0" transitions and $U(x,y)$ will
be greater than 2. So, in the noisy case, a constant value equal to
$P+1$ is assigned to $\mathrm{LBP}^{riu2}_{P,R}$, making it more robust to
noise than previously defined LBP operators.
The rotation invariance property is guaranteed because, when
summing the $f(p)$ sequence to obtain $\mathrm{LBP}^{riu2}_{P,R}$, it is not
weighted by $2^p$. As $f(p)$ is a sequence of 0s and 1s,
$0 \le \mathrm{LBP}^{riu2}_{P,R}(x,y) \le P+1$. As textural measure, we will
use the $P+2$ histogram bins of the $\mathrm{LBP}^{riu2}_{P,R}(x,y)$ codes.
From the three LBP codes presented in this section, LBP, $\mathrm{LBP}_{P,R}$,
and $\mathrm{LBP}^{riu2}_{P,R}$, we will use $\mathrm{LBP}^{riu2}_{P,R}$ in this
paper, because of its rotational invariance property.
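Eqs. (5)-(8) can be sketched for a single pixel as follows. This is our own Python sketch: the bilinear interpolation used for neighbours that fall between pixel positions is one common choice (the paper only states that interpolation is used), and the function name is ours.

```python
import math
import numpy as np

def lbp_riu2(image, x, y, P=8, R=1):
    """Rotation-invariant uniform LBP code, Eq. (7): the sum of the
    thresholded neighbours if the pattern is uniform (U <= 2),
    otherwise P + 1. Neighbours off the pixel grid are estimated by
    bilinear interpolation (an implementation choice)."""
    gc = image[y, x]
    g = []
    for p in range(P):
        xp = x + R * math.sin(2 * math.pi * p / P)   # Eq. (6)
        yp = y - R * math.cos(2 * math.pi * p / P)
        x0, y0 = int(math.floor(xp)), int(math.floor(yp))
        fx, fy = xp - x0, yp - y0
        val = (image[y0, x0] * (1 - fx) * (1 - fy)
               + image[y0, x0 + 1] * fx * (1 - fy)
               + image[y0 + 1, x0] * (1 - fx) * fy
               + image[y0 + 1, x0 + 1] * fx * fy)
        g.append(val)
    f = [1 if gp >= gc else 0 for gp in g]
    # U counts 0/1 transitions around the circle (g_P = g_0), Eq. (8)
    U = sum(abs(f[p] - f[p - 1]) for p in range(P))
    return sum(f) if U <= 2 else P + 1
```

For a flat neighbourhood every f(p) is 1, so U = 0 and the code is P; for an isolated bright centre every f(p) is 0 and the code is 0.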
4. Textural analysis for signature verification
The analysis of the writing trace in signatures becomes an
application area of textural analysis. The textural features from
the grey level image can reveal personal characteristics of the
signer (i.e. pressure and speed changes, pen-holding, etc.)
complementing classical features proposed in the literature. In
this section we describe a basic scheme for using textural analysis
in automatic signature verification.
Fig. 1. Working out the LBP code of pixel (x, y). In this case I(x, y) = 3, and its LBP
code is LBP(x, y) = 143.
Fig. 2. The surroundings of the central pixel I(x, y) are displayed along with the pth neighbours, marked with black circles, for different P and R values. Left: P = 4, R = 1; the LBP_{4,1}(x, y) code is obtained by comparing g_c = I(x, y) with g_0 = I(x, y−1), g_1 = I(x+1, y), g_2 = I(x, y+1), and g_3 = I(x−1, y). Centre: P = 4, R = 2; the LBP_{4,2}(x, y) code is obtained by comparing g_c = I(x, y) with g_0 = I(x, y−2), g_1 = I(x+2, y), g_2 = I(x, y+2), and g_3 = I(x−2, y). Right: P = 8, R = 2; the LBP_{8,2}(x, y) code is obtained by comparing g_c = I(x, y) with g_0 = I(x, y−2), g_1 = I(x+√2, y−√2), g_2 = I(x+2, y), g_3 = I(x+√2, y+√2), g_4 = I(x, y+2), g_5 = I(x−√2, y+√2), g_6 = I(x−2, y), and g_7 = I(x−√2, y−√2).
Fig. 3. Calculating the LBP^{riu2}_{P,R} code for two cases, with P = 4 and R = 2. Left: g_c = 152, {g_0, g_1, g_2, g_3} = {154, 156, 155, 149}, {f(0), f(1), f(2), f(3), f(4)} = {1, 1, 1, 0, 1}, and U(x, y) = 0 + 0 + 1 + 1 = 2 ≤ 2, therefore LBP^{riu2}_{P,R}(x, y) = 1 + 1 + 1 + 0 = 3. Right: g_c = 154, {g_0, g_1, g_2, g_3} = {155, 152, 159, 148}, {f(0), f(1), f(2), f(3), f(4)} = {1, 0, 1, 0, 1}, U(x, y) = 1 + 1 + 1 + 1 = 4 ≥ 2, and LBP^{riu2}_{P,R}(x, y) = P + 1 = 5. (a) Smooth and uniform grey level change and (b) noisy grey level surroundings. The numbers and the shade intensity represent the grey levels.
4.1. Background removal
The features used in our system characterise the grey level
distribution in a signature image but also require a procedure for
background elimination. Grey levels corresponding to the background
provide no discriminating information, but the noise they add can
negatively affect the characterisation.
In this work, we have used a simple posterisation procedure to
avoid background influence. Obviously, any other efficient
segmentation procedure would also be useful. Posterisation
occurs when the apparent bit depth of an image has been decreased
so much that it has a visual impact. The term "posterisation" is
used because the effect resembles the colour range of a
mass-produced poster, where the print process uses a limited
number of coloured inks.
Let $I(x,y)$ be a 256-level grey scale image and $n_L+1$ the number
of grey levels considered for posterisation. The posterised image
$I_P(x,y)$ is defined as follows:
$$I_P(x,y) = \mathrm{round}\!\left(\mathrm{round}\!\left(\frac{I(x,y)\,n_L}{255}\right)\frac{255}{n_L}\right) \qquad (9)$$
where round(·) rounds the elements to the nearest integers. The
interior round performs the posterisation operation, and the exterior
round guarantees that the resulting grey level of $I_P(x,y)$ is an integer.
In the results presented in this paper, with the MCYT and GPDS
corpuses, we have used a value of $n_L = 3$, obtaining a 4-grey-level
posterised image, the grey levels being 0, 85, 170, and 255.
Perceptually valid values are $n_L = 3$ or 4. With values of
$n_L = 1$ or 2 the signature is half erased, which is not a valid
segmentation. With $n_L = 3$ the signature strokes are well
preserved and the background appears nearly clean. With values
of $n_L > 3$, mainly in the MCYT corpus, more and more salt and
pepper noise appears in the background. In order to avoid further
image processing to eliminate the salt and pepper noise, a value
of $n_L = 3$ was selected.
The images from both corpuses consist of dark strokes against a
white background. In the posterised image the background appears
white (grey level equal to 255) and the signature strokes appear
darker (grey levels equal to 0, 85, or 170). Therefore, to obtain the
binarised signature $I_{bw}(x,y)$ (black strokes and white background)
we apply a simple thresholding operation, as follows:
$$I_{bw}(x,y) = \begin{cases} 255 & \text{if } I_P(x,y) = 255 \\ 0 & \text{otherwise} \end{cases} \qquad (10)$$
The black and white image $I_{bw}(x,y)$ is used as a mask to
segment the original signature, and the segmented signature is
obtained as
$$I_S(x,y) = \begin{cases} 255 & \text{if } I_{bw}(x,y) = 255 \\ I(x,y) & \text{otherwise} \end{cases} \qquad (11)$$
At this point, a complete segmentation between background
and foreground is achieved. An example of the above described
procedure can be seen in Fig. 4.
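Eqs. (9)-(11) can be sketched as follows. This is our own Python sketch; note that NumPy's `round` uses half-to-even rounding, so pixels falling exactly on a bin boundary could differ from a half-away-from-zero implementation.

```python
import numpy as np

def posterise(I, nL=3):
    """Eq. (9): reduce a 256-level image to nL + 1 grey levels."""
    return np.round(np.round(I.astype(float) * nL / 255.0) * 255.0 / nL)

def segment_signature(I, nL=3):
    """Eqs. (10)-(11): binarise the posterised image and use it as a
    mask keeping the original grey levels on strokes, white elsewhere."""
    Ip = posterise(I, nL)
    Ibw = np.where(Ip == 255, 255, 0)
    Is = np.where(Ibw == 255, 255, I)
    return Ibw, Is
```

With nL = 3 this reproduces the boundary described in the Fig. 5 caption: grey level 212 stays in the stroke (posterised to 170) while 213 goes to the background (posterised to 255).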
4.2. Histogram displacement
This section is aimed at reducing the influence of the different
writing ink pens on the segmented signature. We achieve this by
displacing the histogram of the signature pixels toward zero,
keeping the background white with grey level equal to 255.
Assuring that the grey level value of the darkest signature pixel is
always 0, the dynamic range will reflect features only of the
writing style. This can be carried out by subtracting the minimum
grey level value in the image from the signature pixels, as follows:
$$I_G(x,y) = \begin{cases} I_S(x,y) & \text{if } I_S(x,y) = 255 \\ I_S(x,y) - \min\{I_S(x,y)\} & \text{otherwise} \end{cases} \qquad (12)$$
where IG(x, y) is the segmented image histogram displaced toward
zero. Fig. 5 illustrates the effect of this displacement.
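Eq. (12) can be sketched as follows (our own Python; the minimum is taken over the stroke pixels, which is equivalent to the image minimum since the strokes are darker than the white background):

```python
import numpy as np

def displace_histogram(Is):
    """Eq. (12): shift the signature grey levels so the darkest stroke
    pixel becomes 0, leaving the white background (255) untouched."""
    stroke = Is != 255
    Ig = Is.copy().astype(int)
    if stroke.any():
        Ig[stroke] -= Ig[stroke].min()
    return Ig
```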
4.3. Feature extraction
After the segmentation and signature histogram displacement,
the image is cropped to the signature size and resized to
N = 512 and M = 512. The aim of these adjustments is to improve
scale invariance. As interpolation method we use nearest
neighbour, in order to keep the ink texture as invariant as
possible.
4.3.1. GLCM features
To calculate GLCM features, we have to ensure the statistical
significance of the $P(i,j\,|\,\Delta x,\Delta y)$, $0 \le i,j \le G-1$,
GLCM matrix estimation. If we follow the rule of 3 [39], which supposes
an independent, identical distribution, a 1% estimation error with a
95% confidence limit will require at least 300 samples per
component. As $P(i,j)$ contains $G^2$ components, the number of pixel
transitions that we will need for a reliable estimation of all the
$P(i,j)$ components will be $300 \cdot G^2$.
The number of signature pixels for each signature in our databases
has been worked out in its histogram, depicted in Fig. 6. To guarantee
statistical significance at the 98% level for the signatures in the
databases, we work out the 2nd percentile, which corresponds to 23,155
pixels. Then, the number of grey levels should satisfy
$$23{,}155 > 300\,G^2 \;\Rightarrow\; G < \sqrt{23{,}155/300} = 8.78 \qquad (13)$$
We need to take into account that the number of grey levels G
is an integer. So, in order to obtain a reliable estimation of the
GLCM matrix, the signature images will be quantised to G = 8 grey
levels to calculate the P(i, j) matrix, despite losing texture
resolution. Experiments with 16 and 32 grey levels have also
Fig. 4. Posterisation procedure: (a) original image I(x, y) with 256 grey levels, (b) posterised image I_P(x, y) with n_L = 3: 4 grey levels, (c) binarised image I_bw(x, y), and (d) segmented image I_S(x, y): original signature with the background converted to white (grey level equal to 255).
been performed, and the resulting final equal error rate has
confirmed that it is preferable to have a reliable GLCM matrix
estimation than to increase the texture resolution. The quantised
image $I_Q(x,y)$ is obtained from $I_G(x,y)$ as follows:
$$I_Q(x,y) = \mathrm{round}\!\left(\mathrm{fix}\!\left(\frac{I_G(x,y)\,G}{255}\right)\frac{255}{G}\right) \qquad (14)$$
where fix rounds toward zero, and the exterior round guarantees
integer grey levels in the $I_Q(x,y)$ image.
Once the signature image has been quantised to G = 8 grey
levels, four GLCM matrices of size G × G = 8 × 8 (64 components) are
worked out: $P_1 = P(i,j\,|\,\Delta x=1,\Delta y=0)$,
$P_2 = P(i,j\,|\,\Delta x=1,\Delta y=1)$,
$P_3 = P(i,j\,|\,\Delta x=0,\Delta y=1)$, and
$P_4 = P(i,j\,|\,\Delta x=-1,\Delta y=1)$. These
GLCM matrices correspond to joint probability matrices that
relate the grey level of the central pixel (x, y) with the pixel on its
right (x+1, y), right and above (x+1, y+1), above (x, y+1), and left
and above (x−1, y+1). We do not need to work out more GLCM
matrices because, for instance, the relation of pixel (x, y) with
pixel (x−1, y−1) is taken into account when the central pixel is at
(x−1, y−1).
The textural measures obtained for each GLCM matrix are the
following: homogeneity, contrast, entropy, and correlation, all of
which are defined in Section 3. So we have 16 textural measures
(4 measures of 4 different matrices) to calculate. These are
reduced to 8, following the suggestion of Haralick [32].
Suppose that $H_i$, $C_i$, $E_i$, and $O_i$ are the homogeneity, contrast,
entropy, and correlation textural measures, respectively, of
$P_i$, $1 \le i \le 4$. We define the 4-element vector $M$ containing the
average of each textural measure as
$$M = \left\{ \operatorname*{mean}_{1\le i\le 4} H_i,\; \operatorname*{mean}_{1\le i\le 4} C_i,\; \operatorname*{mean}_{1\le i\le 4} E_i,\; \operatorname*{mean}_{1\le i\le 4} O_i \right\} \qquad (15)$$
where the "mean" is
$$\operatorname*{mean}_{1\le i\le 4} H_i = \frac{1}{4}\sum_{i=1}^{4} H_i \qquad (16)$$
and the four-component vector $R$, containing the range of each
textural measure, is
$$R = \left\{ \operatorname*{range}_{1\le i\le 4} H_i,\; \operatorname*{range}_{1\le i\le 4} C_i,\; \operatorname*{range}_{1\le i\le 4} E_i,\; \operatorname*{range}_{1\le i\le 4} O_i \right\} \qquad (17)$$
where the "range" is the difference between the maximum and
the minimum values, i.e.
$$\operatorname*{range}_{1\le i\le 4} H_i = \max_{1\le i\le 4} H_i - \min_{1\le i\le 4} H_i \qquad (18)$$
The eight-component feature vector is obtained by concatenating
the $M$ and $R$ vectors:
$$\text{GLCM Feature Vector} = \{M, R\} \qquad (19)$$
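The whole GLCM feature extraction of this subsection, Eqs. (14)-(19), can be sketched as follows. This is our own unoptimised Python sketch: clipping the white background into bin G−1 is an implementation assumption, and the per-matrix measures are computed directly rather than via a library.

```python
import numpy as np

def glcm_feature_vector(Ig, G=8):
    """Quantise to G levels, build the four GLCM matrices for offsets
    (1,0), (1,1), (0,1), (-1,1), and return {M, R}: the mean and range
    of homogeneity, contrast, entropy and correlation (Eqs. 15-19)."""
    # bin index per pixel: fix(I*G/255), clipped so white (255) is bin G-1
    q = np.minimum(np.floor(Ig.astype(float) * G / 255.0).astype(int), G - 1)
    feats = []
    for dx, dy in [(1, 0), (1, 1), (0, 1), (-1, 1)]:
        P = np.zeros((G, G))
        rows, cols = q.shape
        for y in range(rows):
            for x in range(cols):
                x2, y2 = x + dx, y + dy
                if 0 <= x2 < cols and 0 <= y2 < rows:
                    P[q[y, x], q[y2, x2]] += 1
        P /= P.sum()
        i, j = np.indices((G, G))
        H = np.sum(P ** 2)
        C = np.sum((i - j) ** 2 * P)
        nz = P > 0
        E = -np.sum(P[nz] * np.log(P[nz]))
        mu_i, mu_j = np.sum(i * P), np.sum(j * P)
        s_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
        s_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
        O = (np.sum(i * j * P) - mu_i * mu_j) / (s_i * s_j) if s_i * s_j > 0 else 0.0
        feats.append([H, C, E, O])
    F = np.array(feats)                  # shape (4, 4)
    M = F.mean(axis=0)                   # Eq. (15)
    R = F.max(axis=0) - F.min(axis=0)    # Eq. (17)
    return np.concatenate([M, R])        # Eq. (19): 8 components
```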
Fig. 5. Histogram preprocessing. Upper: histogram and signature detail of image I_S(x, y); lower: histogram and signature detail of I_G(x, y), which is darker than I_S(x, y). Note that the I_S(x, y) histogram finishes abruptly at grey level 213 because of the posterisation process with n_L = 3: as round(212·n_L/255) = 2, pixels with grey level 212 remain within the signature stroke, and as round(213·n_L/255) = 3, pixels with grey level 213 go to the background.
Fig. 6. Histograms of the number of signature pixels for both databases considered
in this paper.
4.3.2. LBP features
To extract the feature set of the signature image IG(x, y) based
on LBP, we have chosen the rotation invariant operator LBP^{riu2}_{P,R}
defined in Section 3. We have studied two cases. For the first case
we consider P = 8 and R = 1, which gives the LBP^{riu2}_{8,1}(x, y) code,
thresholding each pixel with its 8 neighbouring pixels. The
proposed feature vector is the normalized histogram of
LBP^{riu2}_{8,1}(x, y). As 0 ≤ LBP^{riu2}_{8,1}(x, y) ≤ P + 1 = 9, the histogram is
calculated with 10 bins as follows:

$$\mathrm{hisLBP}_{8,1}(l) = \#\left\{ (x,y) \,\middle|\, \mathrm{LBP}^{riu2}_{8,1}(x,y) = l \right\}, \quad 1\le x\le N = 512,\ 1\le y\le M = 512,\ 0\le l\le P+1 = 9 \qquad (20)$$

where # means "number of times". The normalized histogram is
obtained from

$$\mathrm{hisLBP}_{8,1}\ \text{Feature Vector}(l) = \frac{\mathrm{hisLBP}_{8,1}(l)}{\sum_{l=0}^{P+1} \mathrm{hisLBP}_{8,1}(l)}, \quad 0\le l\le P+1 = 9 \qquad (21)$$

For the second case analysed, the feature vector is obtained
from the rotation invariant LBP^{riu2}_{P,R} code with P = 16 and R = 2. In
this case, LBP^{riu2}_{16,2}(x, y) considers the second ring around the (x, y)
pixel. As 0 ≤ LBP^{riu2}_{16,2}(x, y) ≤ P + 1 = 17, the normalized histogram
contains 18 bins, and the feature vector is

$$\mathrm{LBP}^{riu2}_{16,2}\ \text{Feature Vector}(l) = \frac{\mathrm{hisLBP}_{16,2}(l)}{\sum_{l=0}^{P+1} \mathrm{hisLBP}_{16,2}(l)}, \quad 0\le l\le P+1 = 17 \qquad (22)$$
It should be noted that, by including the pixels on the border of
the signature in the GLCM and LBP^{riu2}_{P,R} matrices, both matrices
include a statistical measure of the signature shape, that is, how
many pixels on the signature border are oriented north, north-
west, etc. This results from the background having a grey level
equal to 255 (224 in the case of GLCM, because of the quantisation
with G = 8).
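A minimal NumPy sketch of the LBP^{riu2}_{8,1} histogram of Eqs. (20)–(21) follows. It is a sketch under stated assumptions: the ≥ comparison against the centre pixel and the restriction to interior pixels (ignoring the one-pixel image border) are conventional choices, and only the P = 8, R = 1 case is shown.

```python
import numpy as np

def lbp_riu2_8_1(img):
    """Rotation-invariant uniform LBP codes (P=8, R=1) for the interior
    pixels of `img`. Uniform patterns (at most 2 circular bit transitions)
    map to their number of ones (0..8); all other patterns map to P+1 = 9."""
    c = img[1:-1, 1:-1].astype(int)
    # the 8 neighbours, listed in circular order around the centre
    ring = [img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:], img[2:, 2:],
            img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2], img[:-2, :-2]]
    bits = np.stack([(n.astype(int) >= c).astype(int) for n in ring])  # s(g_p - g_c)
    transitions = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)  # circular U value
    ones = bits.sum(axis=0)
    return np.where(transitions <= 2, ones, 9)

def lbp_histogram(codes, bins=10):
    """Normalised histogram of the codes, as in Eq. (21)."""
    h = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return h / h.sum()
```

On a flat patch every neighbour compares as ≥ the centre, so all codes equal 8 and the histogram puts full mass in bin 8; the P = 16, R = 2 case would follow the same pattern with a 16-sample ring and an 18-bin histogram.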
5. Database
We have used two databases for testing the proposed grey
level based features. Both have been scanned at 600 dpi, which
guarantees a sufficient grey texture representation. The main
difference between them is the pens used. In the MCYT database,
all signers, both genuine and forgers, signed with the same pen on
the same surface. In contrast, in the GPDS database, all users
signed with their own pens on different surfaces. So, similar
results on both databases would indicate that the proposed
features are independent of the ink.
5.1. GPDS-100 Corpus
The GPDS-100 signature corpus contains 24 genuine signa-
tures and 24 forgeries for each of 100 individuals [25], producing
100 × 24 = 2400 genuine signatures and the same number of forgeries.
The genuine signatures were taken in just one session to avoid
scheduling difficulties. The repetitions of each genuine signature
and forgery specimen were collected using each participant’s own
pen on white A4 sheets of paper, featuring two different box sizes:
the first box is 5 cm wide and 1.8 cm high and the second box is
4.5 cm wide and 2.5 cm high. Half of the genuine and forged
specimens were written in each size of box. The forgeries were
collected on a form with 15 boxes. Each forger form shows 5
images of different genuine signatures chosen randomly. The
forger imitated each one 3 times for all 5 signatures. Forgers were
given unlimited time to learn the signatures and perform the
forgeries. The complete signing process was supervised by an
operator.
Once the signature forms were collected, each form was
scanned with a Canon device using 256-level grey scale and
600 dpi resolution. All the signature images were saved in PNG
format.
5.2. MCYT Corpus
The off-line subcorpus of the MCYT signature database [10]
was used. The whole corpus comprises fingerprint and on-line
signature data for 330 contributors from 4 different Spanish sites.
Skilled forgeries are also available in the case of signature data.
Forgers are given the signature images of clients to be forged and,
after training, they are asked to imitate the shape. Signature data
were always acquired with the same ink pen and paper templates
over a pen tablet. Therefore, signature images are also available
on paper. Paper templates of 75 signers (and their associated
skilled forgeries) have been digitised with a scanner at 600 dpi.
The resulting off-line subcorpus has 2250 images of signatures,
with 15 genuine signatures and 15 forgeries per user. This
signature corpus is publicly available at http://atvs.ii.uam.es.
6. Classification
Once the feature matrix is estimated, we need to solve a two-
class classification (genuine or forgery) problem. A brief descrip-
tion of the classification technique used in the verification stage
follows.
6.1. Least squares support vector machines
To model each signature, a least squares support vector
machine (LS-SVM) has been used. SVMs were introduced
within the context of statistical learning theory and structural risk
minimisation. Least squares support vector machines (LS-SVM)
are reformulations of standard SVMs which lead to solving
indefinite linear (KKT) systems. Robustness, sparseness, and
weightings can be imposed on LS-SVMs where needed, and a
Bayesian framework with three levels of inference has been
developed [40] for this purpose.
Only one linear system has to be solved in the optimization
process, which not only simplifies the process but also avoids the
problem of local minima in SVMs. The LS-SVM model is defined in
its primal weight space by

$$\hat{y}(x) = \omega^{T}\varphi(x) + b \qquad (23)$$

where φ(x) is a function that maps the input space into a higher
dimensional feature space, x is the M-dimensional input vector, and ω
and b are the parameters of the model. Given N input–output learning
pairs (x_i, y_i) ∈ R^M × R, 1 ≤ i ≤ N, least squares support vector ma-
chines seek the ω and b that minimize

$$\min_{\omega,b,e} J(\omega,e) = \frac{1}{2}\omega^{T}\omega + \gamma\,\frac{1}{2}\sum_{i=1}^{N} e_i^{2} \qquad (24)$$

subject to

$$y_i = \omega^{T}\varphi(x_i) + b + e_i, \quad 1 \le i \le N \qquad (25)$$

In our case we use a Gaussian RBF kernel as the φ(x) mapping
function. The meta-parameters of the LS-SVM model are the width C
of the Gaussian and the regularisation factor γ. The training
method for the estimation of ω and b can be found in [40]. In this
work, the meta-parameters (γ, C) were established using a grid
search. The LS-SVM trained for each signer uses the same (γ, C)
meta parameters; further details about model construction are
given in the next section.
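The primal problem (23)–(25) is usually solved through its dual, which reduces to a single linear (KKT) system. The sketch below follows the common formulation from the LS-SVM literature [40]; it is an illustration, not the authors' code, and the parameter names `gamma` and `width` mirror the (γ, C) meta-parameters of the text.

```python
import numpy as np

def rbf_kernel(A, B, width):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lssvm_train(X, y, gamma, width):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, width) + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]            # bias b and support values alpha

def lssvm_score(X_train, b, alpha, width, X):
    """Raw LS-SVM output: y_hat(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X, X_train, width) @ alpha + b
```

Training a toy model on two well-separated clusters with targets +1/−1 reproduces the sign of every training label, which matches the fixed zero-threshold decision rule used later in the evaluation protocol.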
7. Evaluation protocol
7.1. Experiments
Each signer is modelled by an LS-SVM, which is trained with 5
and 10 genuine samples in order to compare the performance of the
model against the number of training samples. These samples were
chosen randomly. Random forgeries (genuine samples from other
signers) were used as negative samples, in a similar way to [41]:
in our case we take one genuine sample from each of the other
users of the database (74 for the MCYT Corpus and 99 for the
GPDS Corpus). Keeping in mind the limited number of training
samples, leave-one-out cross-validation (LOOCV) was used to
determine the parameters (γ, C) of the SVM classifier with the
RBF kernel.
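The grid search with LOOCV can be sketched generically. To keep the snippet self-contained, `ridge_train`/`ridge_predict` below are a hypothetical stand-in for the LS-SVM trainer (a plain ridge regression on the raw features, so only `gamma` matters); a real run would plug in the LS-SVM and sweep both γ and the kernel width C.

```python
import numpy as np

def loocv_error(X, y, train_fn, predict_fn, params):
    """Leave-one-out classification error for one cell of the parameter grid."""
    n = len(y)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i
        model = train_fn(X[mask], y[mask], **params)
        pred = predict_fn(model, X[i:i + 1])[0]
        errors += int(np.sign(pred) != np.sign(y[i]))
    return errors / n

def grid_search(X, y, train_fn, predict_fn, grid):
    """Return the parameter cell with the lowest LOOCV error."""
    return min(grid, key=lambda p: loocv_error(X, y, train_fn, predict_fn, p))

# Hypothetical stand-in for the LS-SVM trainer: ridge regression with a bias
# term on targets +1/-1, regularised by 1/gamma.
def ridge_train(X, y, gamma):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + np.eye(Xb.shape[1]) / gamma, Xb.T @ y)

def ridge_predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w
```

The same loop works unchanged with the real LS-SVM once `train_fn`/`predict_fn` wrap its training and scoring routines.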
For testing, random and skilled forgeries were taken into
account. For random forgeries, we select a genuine sample from
each of the other users of the database (different from the one used
for training). For skilled forgeries, all available forgeries were
used: 15 per user for the MCYT Corpus and 24 for the GPDS Corpus.
The training and testing procedure was repeated 10 times with
different training and testing subsets in order to obtain reliable
results. Two classical types of error were considered: Type I error,
or false rejection rate (FRR), which occurs when an authentic
signature is rejected, and Type II error, or false acceptance rate
(FAR), which occurs when a forgery is accepted. Finally, the equal
error rate (EER) was calculated, keeping in mind that the classes
are unbalanced.
To calculate FAR and FRR we need to define a threshold. As the
LS-SVM has been trained with a target value of +1 for genuine
signatures and −1 for forgeries, we have chosen an a priori
constant threshold equal to 0 for all signers: if the LS-SVM
returns a value greater than or equal to 0, the signature is accepted
as genuine; if it returns a value less than 0, the signature is
considered a forgery and consequently rejected.
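This decision rule and the two error rates translate directly into code. A minimal sketch, assuming scores are the raw LS-SVM outputs: FRR is the fraction of genuine scores below the threshold, FAR the fraction of forgery scores at or above it.

```python
import numpy as np

def far_frr(genuine_scores, forgery_scores, threshold=0.0):
    """FRR: genuine signatures rejected (Type I error).
    FAR: forgeries accepted (Type II error).
    A signature is accepted when its score is >= threshold."""
    genuine = np.asarray(genuine_scores, dtype=float)
    forgery = np.asarray(forgery_scores, dtype=float)
    frr = float(np.mean(genuine < threshold))
    far = float(np.mean(forgery >= threshold))
    return far, frr
```

For example, genuine scores (0.5, −0.2, 1.0) against forgery scores (−0.8, 0.3) give FRR = 1/3 and FAR = 1/2 at the fixed threshold 0.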
7.2. Results
Experiments were carried out using different values of the LBP^{riu2}_{P,R}
parameters R and P. First, the values were set to R = 1 and P = 8; then
to R = 2 and P = 16. Finally, a combination at feature
level of both pairs was used. Table 1 shows the results obtained using
5 genuine samples for training and evaluating with skilled
forgeries. As can be seen, the best results were obtained using
the combination LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. This makes sense, because the
new feature vector of length 10 + 18 = 28 includes information on
the first and second pixel rings around the central pixel. Tables 2
and 3 show more detailed information about the results obtained
with LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. Tables 4 and 5 present results for the GLCM
characterisation.
In order to study the system performance when using a
combination of the LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} and GLCM parameters, a feature-
level fusion was carried out to obtain a feature vector of
dimension 10 + 18 + 8 = 36. Tables 6 and 7 present the
results obtained for random and skilled forgeries, respectively. It
is easy to see that the EER decreases when the
different grey level based features are combined. So it seems that the LBP^{riu2}_{P,R} and
GLCM texture measures are uncorrelated, which is logical, because
each texture measure is based on a different principle: LBP^{riu2}_{P,R} is
based on thresholding and GLCM is based on joint statistics.
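The feature-level fusion described above is plain concatenation of the per-signature descriptors. A minimal sketch with random stand-in vectors of the stated dimensions (the descriptor values here are placeholders, not real features):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-signature descriptors with the dimensions used in the text
hist_8_1 = rng.random(10); hist_8_1 /= hist_8_1.sum()     # 10-bin LBP(8,1) histogram
hist_16_2 = rng.random(18); hist_16_2 /= hist_16_2.sum()  # 18-bin LBP(16,2) histogram
glcm_feats = rng.random(8)                                # 8 GLCM measures

fused = np.concatenate([hist_8_1, hist_16_2, glcm_feats])  # feature-level fusion
assert fused.shape == (10 + 18 + 8,)                       # the 36-dimensional vector
```

The fused vector is then fed to the per-signer LS-SVM exactly as the individual feature sets were.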
As stated above, the quality of the texture based parameters is
not due solely to the discriminative ability of the texture to
identify writers. The texture parameters, as defined, also include
shape information, since pixels on the stroke border are included. To
Table 1
Results using LBP^{riu2}_{P,R}. Trained with 5 samples and tested with skilled forgeries.

LBP^{riu2}_{P,R} parameters    Data set   FAR (%)   FRR (%)   EER (%)
R=1, P=8                       MCYT       3.35      30.72     14.30
R=2, P=16                      MCYT       3.17      28.37     13.25
{R=1, P=8}+{R=2, P=16}         MCYT       5.00      24.56     12.82
R=1, P=8                       GPDS-100   3.90      32.51     16.54
R=2, P=16                      GPDS-100   4.24      30.07     15.66
{R=1, P=8}+{R=2, P=16}         GPDS-100   6.17      22.49     13.38

Table 2
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       0.75      26.40     3.81      0.74      9.22
             GPDS-100   0.36      26.64     4.59      0.40      8.40
10 samples   MCYT       1.52      15.23     2.38      0.82      9.32
             GPDS-100   0.73      14.29     2.41      0.50      6.52

Table 3
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       5.00      24.56     12.82     3.79      9.70
             GPDS-100   6.17      22.49     13.38     3.90      8.29
10 samples   MCYT       9.84      13.20     10.68     3.70      8.65
             GPDS-100   10.05     11.36     10.53     3.76      5.77

Table 4
Results using GLCM based features. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       3.12      32.76     6.65      2.19      7.49
             GPDS-100   0.46      37.39     6.40      0.45      6.12
10 samples   MCYT       5.68      21.39     6.68      1.73      8.48
             GPDS-100   1.19      26.34     4.31      0.74      7.11

Table 5
Results using GLCM based features. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       6.49      30.93     16.27     4.16      8.78
             GPDS-100   2.91      35.07     17.12     2.34      7.29
10 samples   MCYT       9.72      21.47     12.65     3.45      8.27
             GPDS-100   4.92      24.61     12.18     2.54      7.24

Table 6
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       0.86      24.21     3.64      0.76      9.77
             GPDS-100   0.27      21.87     3.75      0.34      9.62
10 samples   MCYT       1.53      12.00     2.20      0.83      8.16
             GPDS-100   0.55      10.35     1.76      0.43      5.83
verify this hypothesis, we converted the signatures to
black and white and computed the LBP and GLCM matrices. The
results are given in Tables 8 and 9. It can be seen that these results
are only slightly worse than those in Tables 6 and 7. This
confirms that the texture features contain shape information and
that the grey level data provide some additional information about the
writer.
A comparison of the performance of different signature verification
systems is a difficult task, since each author constructs their own
signature data sets. The lack of a standard international signature
database continues to be a major problem for performance
comparison. For the sake of completeness, Table 10 presents
some results obtained by published studies that used the MCYT
database. Although a direct comparison of the results is not possible,
since the training and testing methodologies and
the classification strategies used by each author differ,
Table 10 enables the reader to view the results of the proposed
methodology alongside those published by other authors.
The next step in analysing grey scale based features is to
combine them with geometry based features. The two types of
features are expected to be uncorrelated, so their
combination should improve the automatic handwritten signature
verification (AHSV) scheme. For geometry based features we
have used the contour-hinge algorithm proposed in [46] and used
in [45] for AHSV with the MCYT Corpus. Table 11 shows the
results obtained by [45] using the contour-hinge algorithm and
with our implementation of it. To compare the results, it must be
taken into account that in our work we have not used score
normalization, i.e. the threshold is 0 for all users. On the other
hand, [45] uses a user-dependent a posteriori score normalization;
that is to say, their EER is an indication of the level of performance
with an ideal score alignment between users. The score
normalization used by [45] is s' = s − s_l, where s is
the raw similarity score computed by the signature matcher, s' is
the normalized similarity score, and s_l is the user-dependent
decision threshold at the EER obtained from a set of genuine and
impostor scores for user l. So, for a fair comparison, we give
our results both with the score normalization of [45] and without score
normalization. As can be seen, the contour-hinge parameters work
slightly better with the MCYT than with the GPDS Corpus.
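The a posteriori normalization s' = s − s_l can be sketched as follows. The search for s_l over the pooled score values is an assumption about how the EER point is located; the normalization itself is the plain subtraction from the text.

```python
import numpy as np

def threshold_at_eer(genuine_scores, impostor_scores):
    """User-dependent threshold s_l: the pooled score value at which
    |FAR - FRR| is smallest for this user's genuine and impostor scores."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    best_t, best_gap = None, np.inf
    for t in np.sort(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine < t)    # genuine rejected at threshold t
        far = np.mean(impostor >= t)  # impostors accepted at threshold t
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    return best_t

def normalize_score(s, s_l):
    """s' = s - s_l; with this alignment the accept rule is s' >= 0 for every user."""
    return s - s_l
```

After subtraction, a single global threshold of 0 plays the role of the ideal per-user threshold, which is why the normalized EERs in Table 11 are lower than the unnormalized ones.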
Tables 12–17 present results that confirm how features based
on grey level information can be combined with features based on
binary images to improve overall system performance. These
tables offer information on the feature-level combination of
LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM + contour-hinge features. Again, 5 and 10
genuine samples, respectively, were used in the training set for
positive samples, and random forgeries (genuine samples from
Table 7
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       4.53      23.25     12.02     3.55      9.26
             GPDS-100   5.13      20.82     12.06     3.43      8.44
10 samples   MCYT       7.53      12.61     8.80      3.96      9.66
             GPDS-100   8.64      9.66      9.02      3.52      6.52

Table 8
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM with B/W signatures. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       0.57      26.44     3.65      0.58      9.42
             GPDS-100   0.34      23.44     4.06      0.34      8.96
10 samples   MCYT       1.25      13.79     2.04      0.70      9.32
             GPDS-100   0.68      11.29     1.99      0.46      6.21

Table 9
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM with B/W signatures. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       3.93      26.20     12.84     3.35      8.91
             GPDS-100   4.68      23.88     13.17     3.27      8.91
10 samples   MCYT       7.86      13.92     9.37      2.87      8.65
             GPDS-100   9.00      11.03     9.75      3.75      6.46

Table 10
Comparison of the proposed approach with other published methods.

Method                              EER (%)           Value scale
[42]                                25.10             Grey
[43]                                22.40/20.00 (a)   B/W
[44]                                15.00             B/W
[10]                                11.00/9.28 (a)    B/W
[45]                                10.18/6.44 (a)    B/W
Proposed approach (LBP+GLCM)        12.02/8.80 (a)    Grey

(a) 5/10 genuine samples used for training.

Table 11
Using contour-hinge parameters proposed in [45].

Algorithm                                                                   EER (%)
Reported in [45] with MCYT Corpus                                           10.18
Implemented here with the a posteriori score normalization of [45], MCYT    10.32
Implemented here without score normalization, MCYT Corpus                   14.81
Implemented here without score normalization, GPDS Corpus                   15.17

Table 12
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + contour-hinge. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       0.01      26.47     3.16      0.02      7.59
             GPDS-100   0.01      22.99     3.71      0.01      6.17
10 samples   MCYT       0.04      8.45      0.57      0.03      6.28
             GPDS-100   0.02      7.76      0.98      0.03      4.17

Table 13
Results using GLCM + contour-hinge. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       0.02      27.75     3.32      0.03      7.58
             GPDS-100   0.00      23.29     3.75      0.01      5.44
10 samples   MCYT       0.05      9.12      0.62      0.06      6.81
             GPDS-100   0.02      8.42      1.06      0.04      4.02

Table 14
Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM + contour-hinge. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (σ)   FRR (σ)
5 samples    MCYT       0.03      20.17     2.43      0.03      6.73
             GPDS-100   0.01      18.26     2.95      0.02      5.52
10 samples   MCYT       0.15      5.07      0.47      0.15      4.93
             GPDS-100   0.06      5.56      0.74      0.06      3.05
other signers in the database) as negative samples. We should
note that in this case the results with the GPDS Corpus are worse
than those for the MCYT Corpus because of the contour-hinge
performance.
8. Conclusions
A new off-line signature verification methodology based on
grey level information is described. The performance of the
system is presented with reference to two experimental signature
databases containing samples from 75 and 100 individuals,
including skilled forgeries. The experimental results for skilled
forgeries (Tables 3 and 5) show that using grey level information
achieves reasonable system performance on the MCYT Corpus:
EER = 12.82% with the LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} features and EER = 16.27%
with the GLCM features. Overall system performance improves when
a feature-level fusion of LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM features is im-
plemented. These latter results compare well with the current
state-of-the-art (Table 10). A combination of the proposed
LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM approach and the contour based ap-
proach proposed in [45] leads to further performance improve-
ment, especially in the case of random forgeries. We suggest that
score-level and decision-level fusion should be studied in the
future.
Additionally, a simple, low computational cost segmenta-
tion algorithm based on posterisation has been proposed.
Although a procedure to reduce the effect of ink type was
presented, more effort is needed in that particular
direction to improve this stage of our system. Never-
theless, given the similar results obtained with the MCYT and
GPDS databases using the grey level based features, the
proposed features appear to display some invariance to pen type,
since the MCYT Corpus was produced with the same pen
and the GPDS Corpus with different pens.
Acknowledgments
This work has been funded by Spanish government MCINN
TEC2009-14123-C04 research project; F. Vargas is supported
by the high level scholarships programme, Programme AlBan
No. E05D049748CO.
References
[1] K. Bowyer, V. Govindaraju, N. Ratha, Introduction to the special issue on
recent advances in biometric systems, IEEE Transactions on Systems, Man
and Cybernetics—B 37 (5) (2007) 1091–1095.
[2] D. Zhang, J. Campbell, D. Maltoni, R. Bolle, Special issue on biometric systems,
IEEE Transactions on Systems, Man and Cybernetics—C 35 (3) (2005) 273–275.
[3] S. Prabhakar, J. Kittler, D. Maltoni, L. O’Gorman, T. Tan, Introduction to the special
issue on biometrics: progress and directions, PAMI 29 (4) (2007) 513–516.
[4] S. Liu, M. Silverman, A practical guide to biometric security technology, IEEE
IT Professional 3 (1) (2001) 27–32.
[5] R. Plamondon, S. Srihari, On-line and off-line handwriting recognition: a
comprehensive survey, IEEE Transactions on Pattern Analysis and Machine
Intelligence 22 (1) (2000) 63–84.
[6] K. Franke, J.R. del Solar, M. Köpen, Soft-biometrics: soft computing for
biometric-applications, Tech. Rep. IPK, 2003.
[7] S. Impedovo, G. Pirlo, Verification of handwritten signatures: an overview, in:
ICIAP ’07: Proceedings of the 14th International Conference on Image
Analysis and Processing, IEEE Computer Society, Washington, DC, USA,
2007, pp. 191–196, doi:http://dx.doi.org/10.1109/ICIAP.2007.131.
[8] R. Plamondon, in: Progress in Automatic Signature Verification, World
Scientific Publications, 1994.
[9] M. Fairhurst, New perspectives in automatic signature verification, Tech. Rep.
1, Information Security Technical Report, 1998.
[10] J. Fierrez-Aguilar, N. Alonso-Hermira, G. Moreno-Marquez, J. Ortega- Garcia,
An off-line signature verification system based on fusion of local and global
information, in: Workshop on Biometric Authentication, Springer LNCS-3087,
2004, pp. 298–306.
[11] Y. Kato, M. Yasuhara, Recovery of drawing order from single-stroke hand-
writing images, IEEE Transactions on Pattern Analysis and Machine
Intelligence, 22(9) (2000).
[12] S. Lee, J. Pan, Offline tracking and representation of signatures, IEEE
Transactions on Systems, Man and Cybernetics 22 (4) (1992) 755–771.
[13] N. Herbst, C. Liu, Automatic signature verification based on accelerometry,
Tech. Rep., IBM Journal of Research Development, 1977.
[14] C. Sansone, M. Vento, Signature verification: increasing performance by a
multi-stage system, Pattern Analysis & Applications 3 (2000) 169–181.
[15] H. Cardot, M. Revenu, B. Victorri, M. Revillet, A static signature verification
system based on a cooperative neural network architecture, International
Journal on Pattern Recognition and Artificial Intelligence 8 (3) (1994) 679–692.
[16] K. Franke, O. Bünnemeyer, T. Sy, Ink texture analysis for writer identification,
in: IWFHR ’02: Proceedings of the Eighth International Workshop on
Frontiers in Handwriting Recognition (IWFHR’02), IEEE Computer Society,
Washington, DC, USA, 2002, p. 268.
[17] K. Franke, S. Rose, Ink-deposition model: the relation of writing and ink
deposition processes, in: IWFHR ’04: Proceedings of the Ninth International
Workshop on Frontiers in Handwriting Recognition, IEEE Computer Society,
Washington, DC, USA, 2004, pp. 173–178, doi:http://dx.doi.org/10.1109/
IWFHR.2004.59.
[18] Y. Qiao, M. Yasuhara, Recovering dynamic information from static handwritten
images, in: Frontiers on Handwritten Recognition 04, 2004, pp. 118–123.
[19] A. El-Baati, A.M. Alimi, M. Charfi, A. Ennaji, Recovery of temporal information
from off-line Arabic handwriting, in: AICCSA ’05: Proceedings of the ACS/IEEE
2005 International Conference on Computer Systems and Applications, IEEE
Computer Society, Washington, DC, USA, 2005, pp. 127–vii.
[20] R. Plamondon, W. Guerfali, The 2/3 power law: when and why? Acta
Psychologica 100 (1998) 85–96.
[21] M. Ammar, Y. Yoshida, T. Fukumura, A new effective approach for automatic
off-line verification of signatures by using pressure features, in: Proceedings
8th International Conference on Pattern Recognition, 1986, pp. 566–569.
[22] D. Doermann, A. Rosenfeld, Recovery of temporal information from static
images of handwriting, International Journal of Computer Vision 15 (1–2)
(1995) 143–164.
[23] J. Guo, D. Doermann, A. Rosenfeld, Forgery detection by local correspondence,
International Journal of Pattern Recognition and Artificial Intelligence 15 (4)
(2001) 579–641.
[24] L. Oliveira, E. Justino, C. Freitas, R. Sabourin, The graphology applied to
signature verification, in: 12th Conference of the International Graphonomics
Society, 2005, pp. 286–290.
[25] M. Ferrer, J. Alonso, C. Travieso, Offline geometric parameters for automatic
signature verification using fixed-point arithmetic, IEEE Transactions on
Pattern Analysis and Machine Intelligence 27 (6) (2005) 993–997.
[26] K. Huang, H. Yan, Off-line signature verification based on geometric feature
extraction and neural network classification, Pattern Recognition 30 (1)
(1997) 9–17.
Table 15
Results using LBPriu2
8,1 þLBPriu2
16,2 þcontour-hinge. Tested with skilled forgeries.
Training Data set FAR (%) FRR (%) EER (%) FAR (s) FRR (s)
5 samples MCYT 2.21 26.43 11.90 1.66 8.27
GPDS-100 4.71 24.36 13.40 3.16 7.05
10 samples MCYT 6.54 8.69 7.08 2.28 6.38
GPDS-100 13.67 8.08 11.61 3.86 4.24
Table 16
Results using GLCM+contour-hinge. Tested with skilled forgeries.
Training Data set FAR (%) FRR (%) EER (%) FAR (s) FRR (s)
5 samples MCYT 1.64 33.31 14.31 1.29 7.68
GPDS-100 4.40 28.64 15.11 3.04 6.91
10 samples MCYT 5.51 13.31 7.46 2.30 8.25
GPDS-100 11.90 11.52 11.76 4.42 5.72
Table 17
Results using LBPriu2
8,1 þLBPriu2
16,2 þGLCMþcontour-hinge. Tested with skilled forgeries.
Training Data set FAR (%) FRR (%) EER (%) FAR (s) FRR (s)
5 samples MCYT 2.71 24.13 11.28 1.62 7.83
GPDS-100 4.79 23.09 12.88 2.74 6.68
10 samples MCYT 6.77 8.59 7.23 2.45 6.87
GPDS-100 13.13 7.46 11.04 3.86 3.91
J.F. Vargas et al. / Pattern Recognition 44 (2011) 375–385
384
[27] H. Lv, W. Wang, C. Wang, Q. Zhuo, Off-line Chinese signature verification based
on support vector machine, Pattern Recognition Letters 26 (2005) 2390–2399.
[28] A. Mitra, P. Kumar, C. Ardil, Automatic authentification of handwritten
documents via low density pixel measurements, International Journal of
Computational Intelligence 2 (4) (2005) 219–223.
[29] J. Vargas, M. Ferrer, C. Travieso, J. Alonso, Off-line signature verification based
on high pressure polar distribution, in: ICFHR 2008, Montreal, 2008.
[30] K. Franke, Stroke-morphology analysis using super-imposed writing move-
ments, in: IWCF, 2008, pp. 204–217.
[31] R.W. Conners, C.A. Harlow, A theoretical comparison of texture algorithms, IEEE
Transactions on Pattern Analysis and Machine Intelligence 2 (3) (1980) 204–222.
[32] R.M. Haralick, Statistical and structural approaches to texture, Proceedings of
the IEEE 67 (5) (1979) 786–804.
[33] D. He, L. Wang, J. Guibert, Texture feature extraction, Pattern Recognition
Letters 6 (4) (1987) 269–273.
[34] M. Trivedi, C. Harlow, R. Conners, S. Goh, Object detection based on gray level
cooccurrence, Computer Vision, Graphics and Image Processing 28 (3) (1984)
199–219.
[35] S. Marcel, Y. Rodriguez, G. Heusch, On the recent use of local binary patterns
for face authentication, International Journal on Image and Video Processing,
Special Issue on Facial Image Processing, IDIAP-RR 06-34, 2007.
[36] S. Nikam, S. Agarwal, Texture and wavelet-based spoof fingerprint detection
for fingerprint biometric systems, in: ICETET ’08: Proceedings of the 2008
First International Conference on Emerging Trends in Engineering and
Technology, IEEE Computer Society, Washington, DC, USA, 2008, pp. 675–680,
doi:http://dx.doi.org/10.1109/ICETET.2008.134.
[37] T. Mäenpää, The local binary pattern approach to texture analysis—extensions
and applications, Ph.D. thesis, University of Oulu, Acta Univ. Oulu C 187, 2003,
http://herkules.oulu.fi/isbn9514270762/.
[38] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and
rotation invariant texture classification with local binary patterns, IEEE
Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002)
971–987.
[39] A.J. Mansfield, J.L. Wayman, Best Practices in Testing and Reporting
Performance of Biometric Devices Version 2.01, National Physical Laboratory,
San Jose State University NPL Report CMSC 14/02, August 2002.
[40] J.A.K. Suykens, T.V. Gestel, J.D. Brabanter, B.D. Moor, J. Vandewalle, in: Least
Squares Support Vector Machines, World Scientific Publishing Co. Pte. Ltd., 2002.
[41] D. Bertolini, L. Oliveira, E. Justino, R. Sabourin, Reducing forgeries in writer-
independent off-line signature verification through ensemble of classifiers,
Pattern Recognition 43 (1) (2009) 387–396.
[42] I. Güler, M. Meghdadi, A different approach to off-line handwritten signature
verification using the optimal dynamic time warping algorithm, Digital Signal
Processing 18 (6) (2008) 940–950.
[43] F. Alonso-Fernandez, M.C. Fairhurst, J. Fierrez, J. Ortega-Garcia, Automatic
measures for predicting performance in off-line signature, in: IEEE Proceed-
ings of the International Conference on Image Processing, ICIP, vol. 1, 2007,
pp. 369–372.
[44] J. Wen, B. Fang, Y. Tang, T. Zhang, Model-based signature verification with
rotation invariant features, Pattern Recognition 42 (7) (2009) 1458–1466.
[45] A. Gilperez, F. Alonso-Fernandez, S. Pecharroman, J. Fierrez, J. Ortega- Garcia,
Off-line signature verification using contour features, in: Proceedings of
the International Conference on Frontiers in Handwriting Recognition, ICFHR,
2008.
[46] M. Bulacu, Statistical pattern recognition for automatic writer identification
and verification, Ph.D. thesis, Artificial Intelligence Institute, University of
Groningen, The Netherlands, March 2007, http://www.ai.rug.nl/~bulacu/.
Jesus F. Vargas was born in Colombia in 1978. He received his B.Sc. degree in Electronic Engineering in 2001 and M.Sc. degree in Industrial Automation in 2003, both from
Universidad Nacional de Colombia. Since 2004, he has been an Assistant Professor at Universidad de Antioquia, Colombia. He is currently a Ph.D. student at the Technological
Centre for Innovation in Communications (CeTIC), Universidad de Las Palmas de Gran Canaria, Spain. His research deals with off-line signature verification.
Carlos M. Travieso-Gonzalez received his M.Sc. degree in Telecommunication Engineering in 1997 from the Polytechnic University of Catalonia (UPC), Spain, and his
Ph.D. degree in 2002 from ULPGC, Spain. He has been an Associate Professor at ULPGC since 2001, teaching subjects on signal processing. His research lines are biometrics,
classification systems, environmental intelligence, and data mining. He is a reviewer for international journals and conferences, and a member of the IASTED Image
Processing Technical Committee.
Jesus B. Alonso received his M.Sc. degree in Telecommunication Engineering in 2001 and his Ph.D. degree in 2006, both from the Department of Computers and Systems at
Universidad de Las Palmas de Gran Canaria (ULPGC), Spain. He has been an Associate Professor at ULPGC since 2002. His interests include signal processing in
biocomputing, nonlinear signal processing, recognition systems, and data mining.
Miguel A. Ferrer was born in Spain in 1965. He received his M.Sc. degree in Telecommunications in 1988 and Ph.D. in 1994, both from the Universidad Politécnica de
Madrid, Spain. He is an Associate Professor at Universidad de Las Palmas de Gran Canaria, where he has taught since 1990 and heads the Digital Signal Processing Group
there. His research interests lie in the fields of biometrics and audio-quality evaluation. He is a member of the IEEE Carnahan Conference on Security Technology Advisory
Committee.
Common physical biometrics include fingerprints, hand or palm geometry, and retinal, iris, or facial characteristics. Behavioural characteristics include signature, voice (which also has a physical component), keystroke pattern, and gait. Signature and voice technologies are the most developed examples of this class of biometrics [4].

The handwritten signature is recognised as one of the most widely accepted personal attributes for identity verification. The signature is a symbol of consent and authorisation, especially in the credit card and bank cheque environment, and has long been an attractive target for fraud. There is a growing demand for individual identification to be processed faster and more accurately, and the design of an automatic signature verification system is a real challenge. Plamondon and Srihari [5] noted that automatic signature verification systems occupy a very specific niche among other automatic identification systems: "On the one hand, they differ from systems based on the possession of something (key, card, etc.) or the knowledge of something (passwords, personal information, etc.), because they rely on a specific, well learned gesture. On the other hand, they also differ from systems based on the biometric properties of an individual (fingerprints, voice prints, retinal prints, etc.), because the signature is still the most socially and legally accepted means of personal identification."

A comparison of signature verification with other recognition technologies (fingerprint, face, voice, retina, and iris scanning) reveals that signature verification has several advantages as an identity verification mechanism. Firstly, signature analysis can only be applied when the person is, or was, conscious and willing to write in the usual manner, although it is possible that individuals may be forced to submit a handwriting sample.
By way of counter-example, a fingerprint may also be used when the person is in an unconscious (e.g. drugged) state. Forging a signature is deemed to be more difficult than forging a fingerprint, given the availability of sophisticated analyses [6]. Unfortunately, signature verification is a difficult discrimination problem, since a handwritten signature is the result of a complex process that depends on the physical and psychological conditions of the signer, as well as the conditions of the signing process [7]. The net result is that a signature is a highly variable entity, and its verification, even for human experts, is not a trivial matter. The scientific challenges and the valuable applications of signature verification have
attracted many researchers, from universities and the private sector, to signature verification. Undoubtedly, automatic signature verification plays an important role in the set of biometric techniques for personal verification [8,9].

In the present study, we focus on features based on grey level information from images containing handwritten signatures, especially those providing information about the ink distribution along the traces delineating the signature. Textural analysis methodologies are included for this purpose, since they provide rotation and luminance invariance.

The paper is organised as follows: Section 2 presents the background to off-line signature verification. Section 3 provides an overview of statistical texture analysis. Section 4 describes the proposed approach. Section 5 presents details about the databases. Section 6 is devoted to the classifiers. Section 7 presents the evaluation protocol and reports the experimental results. The paper ends with concluding remarks.

2. Background

There are two major methods of signature verification. One is an on-line method that measures sequential data, such as handwriting speed and pen pressure, with a special device. The other is an off-line method that uses an optical scanner to obtain handwriting data written on paper. There are, in turn, two main approaches to off-line signature verification: the static approach and the pseudo-dynamic approach. The static approach involves geometric measures of the signature, while the pseudo-dynamic one tries to estimate dynamic information from the static image [10].
On-line systems use special input devices such as tablets, while off-line approaches are much more difficult because the only available information is a static two-dimensional image obtained by scanning pre-written signatures on paper. The dynamic information of the pen-tip (stylus) movement, such as pen-tip coordinates, pressure, velocity, acceleration, and pen-up and pen-down events, can be captured by a tablet in real time but not by an image scanner. The off-line method therefore needs to apply complex image processing techniques to segment and analyse signature shape for feature extraction [11]. Hence, on-line signature verification is potentially more successful. Nevertheless, off-line systems have a significant advantage in that they do not require access to special processing devices when the signatures are produced; in fact, given sufficient verification accuracy, the off-line method has many more practical application areas than the on-line one. Consequently, an increasing amount of research has studied feature-extraction methodology for off-line signature recognition and verification [12].

It is also true that the track of the pen shows a great deal of variability: no two genuine signatures are ever exactly the same. Indeed, two identical signatures would constitute legal evidence of forgery by tracing. The normal variability of signatures constitutes the greatest obstacle to achieving automatic verification. Signatures vary in their complexity, duration, and vulnerability to forgery; signers vary in their coordination and consistency. Thus, the security of the system varies from user to user: a short, common name is no doubt easier to forge than a long, carefully written name, no matter what technique is employed. A system must therefore be capable of degrading gracefully when supplied with inconsistent signatures, and the security risks must be kept to acceptable levels [13].
Signature verification is addressed by taking into account three different types of forgeries: random forgeries, produced knowing neither the name of the signer nor the shape of his signature; simple forgeries, produced knowing the name of the signer but without having an example of his signature; and skilled forgeries, produced by people who, after studying an original instance of the signature, attempt to imitate it as closely as possible. Clearly, the problem of signature verification becomes progressively more difficult when passing from random to simple and skilled forgeries, the latter being so difficult a task that even human beings make errors in several cases. Indeed, exercises in imitating a signature often produce forgeries so similar to the originals that discrimination is practically impossible; in many cases, the distinction is complicated even further by the large variability introduced by some signers when writing their own signatures [14]. For instance, studies on signature shape found that North American signatures are typically more stylistic, in contrast to the highly personalised and "variable in shape" European ones [15].

2.1. Off-line signature verification based on pseudo-dynamic features

Dynamic information cannot be derived directly from static signature images. Instead, some features can be derived that partly represent dynamic information. These special characteristics are referred to as pseudo-dynamic information; the term "pseudo-dynamic" is used to distinguish real dynamic data, recorded during the writing process, from information that can be reconstructed from the static image [15]. There are different approaches to the reconstruction of dynamic information from static handwriting records. Techniques from the field of forensic document examination are mainly based on microscopic inspection of the writing trace and assumptions about the underlying writing process [16].
Another paper by the same author [17] describes studies on the influence of physical and bio-mechanical processes on the ink trace and aims at providing a solid foundation for enhanced signature analysis procedures. Simulated human handwriting movements are considered by means of a writing robot in order to study the relationship between writing process characteristics and the ink deposited on paper. Approaches from the field of image processing and pattern recognition can be divided into: methods for estimating the temporal order of stroke production [18,19]; methods inspired by motor control theory, which recover temporal features on the basis of stroke geometries such as curvature [20]; and, finally, methods analysing stroke thickness and/or stroke intensity variations [21–25]. An analysis based mainly on grey level distribution, in accordance with the methods of the last group, is reported in this paper. A grey level image of a scanned handwritten signature indicates that some pixels may represent shapes written with high pressure, which appear as darker zones. High pressure points (HPPs) can be defined as those signature pixels whose grey level values are greater than a suitable threshold. The study of high pressure features was proposed by Ammar et al. [21] to indicate regions where more physical effort was made by the signer. This idea of calculating a threshold to find the HPPs was adopted and developed by other researchers [26,14]. Lv et al. [27] set two thresholds to store only the foreground points and edge points. They analyse only the remaining points whose grey level value lies between the two thresholds and divide them into 12 segments. The percentage of the points whose grey level value falls in the corresponding segment is one of the values of the feature vector that reflects the grey level distribution. Lv and co-workers also consider the stroke width distribution.
In order to analyse not only HPPs but also low pressure points (LPPs), a complementary threshold has been proposed by Mitra et al. [28]. In a previous work, we used a radial and angular partition (RAP) for a local analysis to determine the ratio, over each cell, between the HPPs and all the points making up the binary version of the image [29].

J.F. Vargas et al. / Pattern Recognition 44 (2011) 375–385

Franke [30] evaluates ink-trace characteristics that are affected by the interaction of bio-mechanical writing and physical ink-deposition processes. The analysis focused on the ink intensity, which is captured along the entire writing trace of a signature. The adaptive segmentation of ink-intensity distributions takes the influences of different writing instruments into account and supports the cross-validation of different pen probes. In this way, texture analysis of the ink trace appears to be an interesting approach for characterising personal writing in enhanced handwritten signature verification procedures.

3. Statistical texture analysis

Statistical texture analysis requires the computation of texture features from the statistical distribution of observed combinations of intensities at specified positions relative to each other in an image. According to the number of intensity points (pixels) in each combination, the texture statistics are classified as first-order, second-order, or higher-order. Biometric systems based on signature verification, in conjunction with textural analysis, can reveal information about the ink-pixel distribution, which reflects personal characteristics of the signer, i.e. pen-holding, writing speed, and pressure. However, we do not think that ink distribution information alone is sufficient for signer identification. So, in the specific case of signature strokes, we have also taken into account, for the textural analysis, the pixels in the stroke contour, meaning those stroke pixels that lie on the signature-background border. These pixels will include statistical information about the signature shape, so this distribution data may be considered as a combination of textural and shape information.

3.1. Statistical features of first order

Statistical features of first order, as represented in a histogram, take into account the individual grey level value of each pixel in an image I(x, y), 1 ≤ x ≤ N, 1 ≤ y ≤ M, but the spatial arrangement is not considered, i.e. different spatial features can have the same grey level histogram. A classical way of parameterising the histogram is to measure its average and standard deviation. Obviously, the discriminative ability of first order statistics is really low for automatic signature verification, especially when user and forger use a similar writing instrument. In fact, most researchers normalise the histogram so as to reduce the noise for the subsequent processing of the signature.

3.2. Grey level co-occurrence matrices

The grey level co-occurrence matrix (GLCM) method is a way of extracting second order statistical texture features from the image [31]. This approach has been used in a number of applications, including ink type analysis [16], e.g. [32–34]. A GLCM of an image I(x, y) is a matrix P(i, j | Δx, Δy), 0 ≤ i ≤ G−1, 0 ≤ j ≤ G−1, where the number of rows and columns is equal to the number of grey levels G. The matrix element P(i, j | Δx, Δy) is the relative frequency with which two pixels with grey levels i and j occur separated by a pixel distance (Δx, Δy). For simplicity, in the rest of the paper we will denote the GLCM matrix as P(i, j). For a statistically reliable estimation of the relative frequency we need a sufficiently large number of occurrences of each event. The reliability of P(i, j) depends on the number of grey levels G and the size of the image I(x, y). In the case of images containing signatures, this depends on the number of pixels in the signature strokes rather than on the image size. If the statistical reliability is not sufficient, we need to reduce G to guarantee a minimum number of pixel transitions per P(i, j) matrix component, despite losing texture description accuracy.
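As an illustration, the GLCM estimation described above can be sketched in a few lines of Python (a minimal sketch; the function name and the list-of-lists image representation are ours, not from the paper):

```python
def glcm(img, dx, dy, G):
    """Estimate the GLCM P(i, j | dx, dy) of an image already
    quantised to grey levels 0..G-1. Returns a G x G matrix of
    relative frequencies (entries sum to 1)."""
    rows, cols = len(img), len(img[0])
    P = [[0.0] * G for _ in range(G)]
    count = 0
    for y in range(rows):
        for x in range(cols):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < cols and 0 <= y2 < rows:
                P[img[y][x]][img[y2][x2]] += 1
                count += 1
    # Normalise counts into relative frequencies.
    if count:
        P = [[v / count for v in row] for row in P]
    return P
```

For example, `glcm(img, 1, 0, G)` counts horizontal pixel pairs, i.e. P(i, j | Δx = 1, Δy = 0).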
The number of grey levels G can be reduced easily by quantising the image I(x, y). The classical feature measures extracted from the GLCM matrix (see Haralick [32] and Conners and Harlow [31]) are the following:

Texture homogeneity H:

H = \sum_{i=0}^{G-1} \sum_{j=0}^{G-1} \{P(i,j)\}^2     (1)

A homogeneous scene will contain only a few grey levels, giving a GLCM with only a few, but relatively high, values of P(i, j). Thus, the sum of squares will be high.

Texture contrast C:

C = \sum_{n=0}^{G-1} n^2 \sum_{i=0}^{G-1} \sum_{j=0}^{G-1} P(i,j), \quad |i-j| = n     (2)

This measure of local intensity variation favours contributions from P(i, j) away from the diagonal, i.e. i ≠ j.

Texture entropy E:

E = -\sum_{i=0}^{G-1} \sum_{j=0}^{G-1} P(i,j) \log\{P(i,j)\}     (3)

A homogeneous scene, whose GLCM concentrates its mass in a few components, yields low entropy, while a non-homogeneous scene yields high entropy.

Texture correlation O:

O = \sum_{i=0}^{G-1} \sum_{j=0}^{G-1} \frac{i\,j\,P(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j}     (4)

where μ_i and σ_i are the mean and standard deviation of the P(i, j) rows, and μ_j and σ_j the mean and standard deviation of the P(i, j) columns, respectively. Correlation is a measure of the grey level linear dependence between pixels at the specified positions relative to each other.

3.3. Local binary patterns

The local binary pattern (LBP) operator is defined as a grey level invariant texture measure, derived from a general definition of texture in a local neighbourhood whose centre is the pixel (x, y). Recent extensions of the LBP operator have shown it to be a really powerful measure of image texture, producing excellent results in many empirical studies. In biometrics, LBP has been applied to the specific problem of face recognition [35,36]. The LBP operator can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis. Perhaps the most important property of the LBP operator in real-world applications is its invariance to monotonic grey level changes.
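The four GLCM measures of Section 3.2, Eqs. (1)–(4), can be computed from an estimated P(i, j) as in the following sketch (helper name ours; the entropy carries the conventional minus sign so that it is non-negative, and the correlation uses the standard Haralick form):

```python
import math

def haralick(P):
    """Homogeneity H, contrast C, entropy E and correlation O
    of a GLCM P (G x G, entries summing to 1)."""
    G = len(P)
    H = sum(P[i][j] ** 2 for i in range(G) for j in range(G))
    C = sum((i - j) ** 2 * P[i][j] for i in range(G) for j in range(G))
    E = -sum(P[i][j] * math.log(P[i][j])
             for i in range(G) for j in range(G) if P[i][j] > 0)
    # Marginal distributions over rows (i) and columns (j).
    pi = [sum(P[i]) for i in range(G)]
    pj = [sum(P[i][j] for i in range(G)) for j in range(G)]
    mi = sum(i * pi[i] for i in range(G))
    mj = sum(j * pj[j] for j in range(G))
    si = math.sqrt(sum((i - mi) ** 2 * pi[i] for i in range(G)))
    sj = math.sqrt(sum((j - mj) ** 2 * pj[j] for j in range(G)))
    sij = sum(i * j * P[i][j] for i in range(G) for j in range(G))
    O = (sij - mi * mj) / (si * sj) if si > 0 and sj > 0 else 0.0
    return H, C, E, O
```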
Equally important is its computational simplicity, which makes it possible to analyse images in challenging real-time settings [37]. The local binary pattern operator describes the surroundings of the pixel (x, y) by generating a bit-code from the binary derivatives of a pixel, as a complementary measure of local image contrast. The original LBP operator takes the eight neighbouring pixels and uses the centre grey level value I(x, y) as a threshold. The operator generates a binary code 1 if the neighbour is greater than or equal to the central level; otherwise it generates a binary code 0. The eight neighbouring binary codes can be represented by an 8-bit number. The LBP operator outputs for all the pixels in the image can be accumulated to form a histogram, which represents a measure of the image texture. Fig. 1 shows an example of the LBP operator.
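For illustration, the classical 8-neighbour LBP operator just described might be implemented as below. The neighbour ordering (and hence the bit weights) is a free choice as long as it is used consistently; the paper's Fig. 1 fixes its own ordering, so the numeric codes need not coincide:

```python
def lbp8(img, x, y):
    """Classical 8-neighbour LBP code of pixel (x, y): each neighbour
    greater than or equal to the centre contributes a 1-bit, giving
    an 8-bit code in 0..255."""
    c = img[y][x]
    # Clockwise starting from the top-left neighbour (our choice).
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```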
The above LBP operator is extended in [38] to a generalised grey level and rotation invariant operator. The generalised LBP operator is derived on the basis of a circularly symmetric neighbour set of P members on a circle of radius R. The parameter P controls the quantisation of the angular space and R determines the spatial resolution of the operator. The LBP code of a central pixel (x, y) with P neighbours and radius R is defined as

LBP_{P,R}(x,y) = \sum_{p=0}^{P-1} s(g_p - g_c)\, 2^p     (5)

where s(l) = 1 if l ≥ 0 and s(l) = 0 if l < 0 is the unit step function, g_c = I(x, y) is the grey level value of the central pixel, and g_p is the grey level of the p-th neighbour, defined as

g_p = I\left(x + R \sin\frac{2\pi p}{P},\; y - R \cos\frac{2\pi p}{P}\right)     (6)

If the p-th neighbour does not fall exactly on a pixel position, its grey level is estimated by interpolation. An example can be seen in Fig. 2. In a further step, [38] defines an LBP_{P,R} operator invariant to rotation as follows:

LBP^{riu2}_{P,R}(x,y) = \sum_{p=0}^{P-1} s(g_p - g_c) \;\text{if}\; U(x,y) \le 2; \quad P+1 \;\text{otherwise}     (7)

where

U(x,y) = \sum_{p=1}^{P} \left| s(g_p - g_c) - s(g_{p-1} - g_c) \right|, \quad \text{with } g_P = g_0     (8)

Analysing the above equations, U(x, y) can be calculated as follows: (1) work out the function f(p) = s(g_p − g_c), 0 ≤ p ≤ P, considering g_P = g_0; (2) obtain its derivative: f(p) − f(p−1), 1 ≤ p ≤ P; (3) calculate its absolute value: |f(p) − f(p−1)|, 1 ≤ p ≤ P; and (4) obtain U(x, y) as the sum \sum_{p=1}^{P} |f(p) − f(p−1)|. If the grey levels of the neighbours of pixel (x, y) are uniform or smooth, as in the case of Fig. 3, left, f(p) will be a sequence of "0"s and "1"s with at most two transitions. In this case U(x, y) will be zero or two, and the LBP^{riu2}_{P,R} code is worked out as the sum \sum_{p=0}^{P-1} f(p). Conversely, if the grey levels surrounding pixel (x, y) vary quickly, as in the case of Fig. 3, right, f(p) will be a sequence containing several "0"–"1" or "1"–"0" transitions and U(x, y) will be greater than 2.
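A minimal sketch of Eqs. (7) and (8), assuming the P neighbour grey levels have already been sampled on the circle (the interpolation of Eq. (6) is omitted for brevity; the function name is ours):

```python
def lbp_riu2(gc, gp):
    """Rotation invariant uniform LBP code from the centre grey level gc
    and the list gp of P neighbour grey levels."""
    P = len(gp)
    f = [1 if g >= gc else 0 for g in gp]          # f(p) = s(g_p - g_c)
    # U counts 0/1 transitions along the circular sequence (f[-1] = f[P-1]
    # closes the circle, implementing g_P = g_0).
    U = sum(abs(f[p] - f[p - 1]) for p in range(P))
    return sum(f) if U <= 2 else P + 1
```

With the two worked cases of Fig. 3, this yields 3 for the smooth neighbourhood and P + 1 = 5 for the noisy one.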
So, in the noisy case, a constant value equal to P+1 is assigned to LBP^{riu2}_{P,R}, making it more robust to noise than the previously defined LBP operators. The rotation invariance property is guaranteed because, when summing the f(p) sequence to obtain LBP^{riu2}_{P,R}, it is not weighted by 2^p. As f(p) is a sequence of 0s and 1s, 0 ≤ LBP^{riu2}_{P,R}(x,y) ≤ P+1. As a textural measure, we will use the P+2 histogram bins of the LBP^{riu2}_{P,R}(x,y) codes. Of the three LBP codes presented in this section, LBP, LBP_{P,R}, and LBP^{riu2}_{P,R}, we will use LBP^{riu2}_{P,R} in this paper because of its rotational invariance property.

4. Textural analysis for signature verification

The analysis of the writing trace in signatures is a natural application area for textural analysis. The textural features from the grey level image can reveal personal characteristics of the signer (i.e. pressure and speed changes, pen-holding, etc.), complementing classical features proposed in the literature. In this section we describe a basic scheme for using textural analysis in automatic signature verification.

Fig. 1. Working out the LBP code of pixel (x, y). In this case I(x, y) = 3, and its LBP code is LBP(x, y) = 143.

Fig. 2. The surroundings of the I(x, y) central pixel are displayed along with the p-th neighbours, marked with black circles, for different P and R values. Left: P = 4, R = 1, and the LBP_{4,1}(x, y) code is obtained by comparing g_c = I(x, y) with g_0 = I(x, y−1), g_1 = I(x+1, y), g_2 = I(x, y+1), and g_3 = I(x−1, y). Centre: P = 4, R = 2, and the LBP_{4,2}(x, y) code is obtained by comparing g_c = I(x, y) with g_0 = I(x, y−2), g_1 = I(x+2, y), g_2 = I(x, y+2), and g_3 = I(x−2, y). Right: P = 8, R = 2, and the LBP_{8,2}(x, y) code is obtained by comparing g_c = I(x, y) with g_0 = I(x, y−2), g_1 = I(x+√2, y−√2), g_2 = I(x+2, y), g_3 = I(x+√2, y+√2), g_4 = I(x, y+2), g_5 = I(x−√2, y+√2), g_6 = I(x−2, y), and g_7 = I(x−√2, y−√2).

Fig. 3. Calculating the LBP^{riu2}_{P,R} code for two cases, with P = 4 and R = 2. Left: g_c = 152, {g_0, g_1, g_2, g_3} = {154, 156, 155, 149}, {f(0), f(1), f(2), f(3), f(4)} = {1, 1, 1, 0, 1}, and U(x, y) = 0+0+1+1 = 2 ≤ 2, therefore LBP^{riu2}_{P,R}(x, y) = 1+1+1+0 = 3. Right: g_c = 154, {g_0, g_1, g_2, g_3} = {155, 152, 159, 148}, {f(0), f(1), f(2), f(3), f(4)} = {1, 0, 1, 0, 1}, U(x, y) = 1+1+1+1 = 4 ≥ 2, and LBP^{riu2}_{P,R}(x, y) = P+1 = 5. (a) Smooth and uniform grey level change and (b) noisy grey level surroundings. The numbers and the shade intensity represent the grey levels.
4.1. Background removal

The features used in our system characterise the grey level distribution in a signature image, but they also require a procedure for background elimination. The grey levels corresponding to the background do not carry discriminating information, and the noise they add can negatively affect the characterisation. In this work, we have used a simple posterisation procedure to avoid the influence of the background. Obviously, any other efficient segmentation procedure would also be useful. Posterisation occurs when the apparent bit depth of an image has been decreased so much that it has a visual impact. The term "posterisation" is used because it influences the image in a similar way to the limited colour range of a mass-produced poster, where the print process uses a limited number of coloured inks.

Let I(x, y) be a 256-level grey scale image and n_L + 1 the number of grey levels considered for posterisation. The posterised image I_P(x, y) is defined as follows:

I_P(x,y) = \mathrm{round}\left(\mathrm{round}\left(\frac{I(x,y)\, n_L}{255}\right) \frac{255}{n_L}\right)     (9)

where round(·) rounds the elements to the nearest integers. The interior round performs the posterisation operation, and the exterior round guarantees that the resulting grey level of I_P(x, y) is an integer. In the results presented in this paper, with the MCYT and GPDS Corpuses, we have used a value of n_L = 3, obtaining a 4-grey-level posterised image, the grey levels being 0, 85, 170, and 255. Perceptually valid values can be n_L = 3 or 4. With values of n_L = 1 or 2 the signature is half erased, so this is not a valid segmentation. With a value of n_L = 3 the signature strokes are well preserved and the background appears nearly clean. With values of n_L > 3, mainly in the MCYT Corpus, more and more salt and pepper noise appears in the background. In order to avoid further image processing to eliminate the salt and pepper noise, a value of n_L = 3 was selected. The images from both corpuses consist of dark strokes against a white background.
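Eq. (9) reduces, for one pixel, to the following sketch (function name ours). With n_L = 3 the surviving levels are 0, 85, 170 and 255, and the cut between stroke and background falls between grey levels 212 and 213:

```python
def posterise(pixel, n_levels=3):
    """Eq. (9): quantise a 0-255 grey level to n_levels + 1 values
    spread back over 0-255 (n_levels = 3 gives 0, 85, 170, 255)."""
    return round(round(pixel * n_levels / 255) * 255 / n_levels)
```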
In the posterised image the background appears white (grey level equal to 255) and the signature strokes appear darker (grey levels equal to 0, 85, or 170). Therefore, to obtain the binarised signature I_bw(x, y) (black strokes on a white background) we apply a simple thresholding operation, as follows:

I_{bw}(x,y) = 255 \;\text{if}\; I_P(x,y) = 255; \quad 0 \;\text{otherwise}     (10)

The black and white image I_bw(x, y) is used as a mask to segment the original signature, and the segmented signature is obtained as

I_S(x,y) = 255 \;\text{if}\; I_{bw}(x,y) = 255; \quad I(x,y) \;\text{otherwise}     (11)

At this point, a complete segmentation of background and foreground is achieved. An example of the above described procedure can be seen in Fig. 4.

4.2. Histogram displacement

This section is aimed at reducing the influence of the different writing ink pens on the segmented signature. We achieve this by displacing the histogram of the signature pixels toward zero, keeping the background white with a grey level equal to 255. By assuring that the grey level value of the darkest signature pixel is always 0, the dynamic range will reflect only features of the writing style. This can be carried out by subtracting the minimum grey level value in the image from the signature pixels, as follows:

I_G(x,y) = I_S(x,y) \;\text{if}\; I_S(x,y) = 255; \quad I_S(x,y) - \min\{I_S(x,y)\} \;\text{otherwise}     (12)

where I_G(x, y) is the segmented image with its histogram displaced toward zero. Fig. 5 illustrates the effect of this displacement.

4.3. Feature extraction

After the segmentation and signature histogram displacement, the image is cropped to the signature size and resized to N = 512 and M = 512. The aim of these adjustments is to improve scale invariance. As the interpolation method we use the nearest neighbour, in order to keep the ink texture as invariant as possible.

4.3.1. GLCM features

To calculate the GLCM features, we have to ensure the statistical significance of the estimation of the GLCM matrix P(i, j | Δx, Δy), 0 ≤ i, j ≤ G−1.
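The segmentation and histogram displacement of Sections 4.1 and 4.2, Eqs. (10)–(12), can be sketched together as follows (function names ours; the posterisation of Eq. (9) is inlined as the mask test):

```python
def segment(img, n_levels=3):
    """Eqs. (10)-(11): pixels whose posterised value is white (255)
    become background; stroke pixels keep their original grey level."""
    return [[255 if round(round(p * n_levels / 255) * 255 / n_levels) == 255
             else p for p in row] for row in img]

def displace(seg):
    """Eq. (12): shift stroke grey levels so the darkest stroke pixel
    becomes 0, leaving the 255 background untouched."""
    ink = [p for row in seg for p in row if p != 255]
    lo = min(ink) if ink else 0
    return [[p if p == 255 else p - lo for p in row] for row in seg]
```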
If we follow the rule of 3 [39], which assumes independent, identically distributed samples, a 1% estimation error with a 95% confidence limit will require at least 300 samples per component. As P(i, j) contains G² components, the number of pixel transitions that we will need for a reliable estimation of all the P(i, j) components will be 300·G². The number of signature pixels for each signature in our databases has been worked out from its histogram, depicted in Fig. 6. To guarantee statistical significance at the 98% level for the signatures in the databases, we work out the 2nd percentile, which corresponds to 23,155 pixels. Then, the number of grey levels should satisfy

23{,}155 > 300\, G^2 \;\Rightarrow\; G < \sqrt{23{,}155/300} = 8.78     (13)

We need to take into account that the number of grey levels G is an integer. So, in order to obtain a reliable estimation of the GLCM matrix, the signature images will be quantised to G = 8 grey levels to calculate the P(i, j) matrix, despite losing texture resolution.

Fig. 4. Posterisation procedure: (a) original image I(x, y) with 256 grey levels, (b) posterised image I_P(x, y) with n_L = 3: 4 grey levels, (c) binarised image I_bw(x, y), and (d) segmented image I_S(x, y): the original signature with the background converted to white (grey level equal to 255).

Experiments with 16 and 32 grey levels have also
been performed, and the resulting final equal error rate confirmed that it is preferable to have a reliable GLCM matrix estimation than to increase the texture resolution. The quantised image I_Q(x, y) is obtained from I_G(x, y) as follows:

I_Q(x,y) = \mathrm{round}\left(\mathrm{fix}\left(\frac{I_G(x,y)\, G}{255}\right) \frac{255}{G}\right)     (14)

where fix rounds toward zero, and the exterior round guarantees integer grey levels in the I_Q(x, y) image. Once the signature image has been quantised to G = 8 grey levels, four GLCM matrices of size G × G = 8 × 8 = 64 are worked out: P_1 = P(i, j | Δx = 1, Δy = 0), P_2 = P(i, j | Δx = 1, Δy = 1), P_3 = P(i, j | Δx = 0, Δy = 1), and P_4 = P(i, j | Δx = −1, Δy = 1). These GLCM matrices correspond to joint probability matrices that relate the grey level of the central pixel (x, y) to the pixels on its right (x+1, y), right and above (x+1, y+1), above (x, y+1), and left and above (x−1, y+1). We do not need to work out more GLCM matrices because, for instance, the relation of pixel (x, y) to pixel (x−1, y−1) is taken into account when the central pixel is at (x−1, y−1).

The textural measures obtained for each GLCM matrix are the following: homogeneity, contrast, entropy, and correlation, all of which are defined in Section 3. So we have 16 textural measures (4 measures of 4 different matrices) to calculate. These are reduced to 8, following the suggestion of Haralick [32]. Suppose that H_i, C_i, E_i, and O_i are the homogeneity, contrast, entropy, and correlation textural measures, respectively, of P_i, 1 ≤ i ≤ 4. We define the 4-element vector M containing the average of each textural measure as

M = \left\{ \mathrm{mean}_{1 \le i \le 4} H_i,\; \mathrm{mean}_{1 \le i \le 4} C_i,\; \mathrm{mean}_{1 \le i \le 4} E_i,\; \mathrm{mean}_{1 \le i \le 4} O_i \right\}     (15)

where the "mean" is

\mathrm{mean}_{1 \le i \le 4} H_i = \frac{1}{4} \sum_{i=1}^{4} H_i     (16)

and the four-component vector R, containing the range of each textural measure, is

R = \left\{ \mathrm{range}_{1 \le i \le 4} H_i,\; \mathrm{range}_{1 \le i \le 4} C_i,\; \mathrm{range}_{1 \le i \le 4} E_i,\; \mathrm{range}_{1 \le i \le 4} O_i \right\}     (17)

where the "range" is the difference between the maximum and the minimum values, i.e.
\mathrm{range}_{1 \le i \le 4} H_i = \max_{1 \le i \le 4} H_i - \min_{1 \le i \le 4} H_i     (18)

The eight-component feature vector is obtained by concatenating the M and R vectors:

\mathrm{GLCM\ Feature\ Vector} = \{M, R\}     (19)

Fig. 5. Histogram preprocessing. Upper: histogram and signature detail of image I_S(x, y); lower: histogram and signature detail of I_G(x, y), which is darker than I_S(x, y). Note that the I_S(x, y) histogram finishes abruptly at grey level 213 because of the posterisation process with n_L = 3: as round(212·n_L/255) = 2, pixels with grey level 212 remain within the signature stroke, and as round(213·n_L/255) = 3, pixels with grey level 213 go to the background.

Fig. 6. Histograms of the number of signature pixels for both databases considered in this paper.
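The quantisation of Eq. (14) and the mean/range aggregation of Eqs. (15)–(19) can be sketched as below (function names and the dict layout of the per-direction measures are ours):

```python
import math

def quantise(pixel, G=8):
    """Eq. (14): requantise a 0-255 grey level to G levels
    (fix, i.e. truncation toward zero, then rescale and round)."""
    return round(math.trunc(pixel * G / 255) * 255 / G)

def mean_range_features(measures):
    """Eqs. (15)-(19): given the four per-direction values of each
    textural measure, e.g. {'H': [H1..H4], 'C': [...], 'E': [...],
    'O': [...]}, return the 8-dimensional vector {M, R}."""
    M = [sum(v) / len(v) for v in measures.values()]   # Eq. (16)
    R = [max(v) - min(v) for v in measures.values()]   # Eq. (18)
    return M + R                                       # Eq. (19)
```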
4.3.2. LBP features

To extract the feature set of the signature image I_G(x, y) based on LBP, we have chosen the rotation invariant operator LBP^{riu2}_{P,R} defined in Section 3. We have studied two cases. For the first case we consider P = 8 and R = 1, which obtains the LBP^{riu2}_{8,1}(x,y) code, thresholding each pixel with its 8 neighbouring pixels. The proposed feature vector is the normalised histogram of LBP^{riu2}_{8,1}(x,y). As 0 ≤ LBP^{riu2}_{8,1}(x,y) ≤ P+1 = 9, the histogram is calculated with 10 bins as follows:

his_{LBP_{8,1}}(l) = \#\{(x,y) : LBP^{riu2}_{8,1}(x,y) = l\}, \quad 1 \le x \le N = 512,\; 1 \le y \le M = 512,\; 0 \le l \le P+1 = 9     (20)

where # means "number of times". The normalised histogram is obtained as

LBP^{riu2}_{8,1}\ \mathrm{Feature\ Vector}(l) = \frac{his_{LBP_{8,1}}(l)}{\sum_{l=0}^{P+1} his_{LBP_{8,1}}(l)}, \quad 0 \le l \le P+1 = 9     (21)

For the second case analysed, the feature vector is obtained from the rotation invariant LBP^{riu2}_{P,R} code with P = 16 and R = 2. In this case, LBP^{riu2}_{16,2}(x,y), we consider the second ring around the pixel (x, y). As 0 ≤ LBP^{riu2}_{16,2}(x,y) ≤ P+1 = 17, the normalised histogram will contain 18 bins, and the feature vector will be

LBP^{riu2}_{16,2}\ \mathrm{Feature\ Vector}(l) = \frac{his_{LBP_{16,2}}(l)}{\sum_{l=0}^{P+1} his_{LBP_{16,2}}(l)}, \quad 0 \le l \le P+1 = 17     (22)

It should be noted that, by including the pixels on the border of the signature in the GLCM and LBP^{riu2}_{P,R} matrices, both matrices will include a statistical measure of the signature shape, i.e. how many pixels on the signature border are oriented north, north-west, etc. This results from the background having a grey level equal to 255 (224 in the case of the GLCM, because of the quantisation with G = 8).

5. Database

We have used two databases for testing the proposed grey level based features. Both have been scanned at 600 dpi, which guarantees a sufficient grey texture representation. The main difference between them is the pens used. In the MCYT database all the signers, genuine and forgers alike, signed with the same pen on the same surface.
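The LBP feature vectors of Eqs. (20)–(22) amount to a normalised count of the riu2 codes over the image; a sketch (function name ours):

```python
def lbp_histogram(codes, P):
    """Eqs. (20)-(22): normalised histogram of LBP_{P,R}^{riu2} codes.
    Codes lie in 0..P+1, so the feature vector has P+2 bins that sum to 1."""
    hist = [0] * (P + 2)
    for c in codes:
        hist[c] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

For P = 8 this yields the 10-bin vector of Eq. (21); for P = 16, the 18-bin vector of Eq. (22).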
In contrast, in the GPDS database all the users signed with their own pens on different surfaces. So, similar results on both databases will point to a measure of ink independence of the proposed features.

5.1. GPDS-100 Corpus

The GPDS-100 signature corpus contains 24 genuine signatures and 24 forgeries of 100 individuals [25], producing 100 × 24 = 2400 genuine signatures and the same number of forgeries. The genuine signatures were taken in just one session to avoid scheduling difficulties. The repetitions of each genuine signature and forgery specimen were collected using each participant's own pen on white A4 sheets of paper, featuring two different box sizes: the first box is 5 cm wide and 1.8 cm high, and the second box is 4.5 cm wide and 2.5 cm high. Half of the genuine and forged specimens were written in each size of box. The forgeries were collected on a form with 15 boxes. Each forger form shows 5 images of different genuine signatures chosen randomly, and the forger imitated each of the 5 signatures 3 times. Forgers were given unlimited time to learn the signatures and perform the forgeries. The complete signing process was supervised by an operator. Once the signature forms were collected, each form was scanned with a Canon device using 256-level grey scale and 600 dpi resolution. All the signature images were saved in PNG format.

5.2. MCYT Corpus

The off-line subcorpus of the MCYT signature database [10] was used. The whole corpus comprises fingerprint and on-line signature data for 330 contributors from 4 different Spanish sites. Skilled forgeries are also available in the case of the signature data: forgers were given the signature images of the clients to be forged and, after training, were asked to imitate the shape. Signature data were always acquired with the same ink pen and paper templates over a pen tablet, so the signature images are also available on paper.
Paper templates of 75 signers (and their associated skilled forgeries) have been digitised with a scanner at 600 dpi. The resulting off-line subcorpus contains 2250 signature images, with 15 genuine signatures and 15 forgeries per user. This signature corpus is publicly available at http://atvs.ii.uam.es.

6. Classification

Once the feature vector is estimated, we need to solve a two-class classification problem (genuine or forgery). A brief description of the classification technique used in the verification stage follows.

6.1. Least squares support vector machines

To model each signature, a least squares support vector machine (LS-SVM) has been used. SVMs were introduced within the context of statistical learning theory and structural risk minimisation. Least squares support vector machines are reformulations of standard SVMs which lead to solving indefinite linear (KKT) systems. Robustness, sparseness, and weightings can be imposed on LS-SVMs where needed, and a Bayesian framework with three levels of inference has been developed for this purpose [40]. Only one linear equation has to be solved in the optimisation process, which not only simplifies the process but also avoids the problem of local minima in the SVM. The LS-SVM model is defined in its primal weight space by

\hat{y}(x) = \omega^T \varphi(x) + b     (23)

where φ(x) is a function that maps the input space into a higher dimensional feature space, x is the M-dimensional input vector, and ω and b are the parameters of the model. Given N input–output learning pairs (x_i, y_i) ∈ R^M × R, 1 ≤ i ≤ N, least squares support vector machines seek the ω and b that minimise

\min_{\omega, b, e} J(\omega, e) = \frac{1}{2} \omega^T \omega + \gamma \frac{1}{2} \sum_{i=1}^{N} e_i^2     (24)

subject to

y_i = \omega^T \varphi(x_i) + b + e_i, \quad 1 \le i \le N     (25)

In our case we use a Gaussian RBF kernel as the φ(x) mapping function. The meta-parameters of the LS-SVM model are the width C of the Gaussian and the regularisation factor γ. The training method for the estimation of ω and b can be found in [40].
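To make the "one linear equation" remark concrete, a minimal sketch of LS-SVM training in the dual is given below: the KKT conditions of Eqs. (24)–(25) reduce to one (N+1)×(N+1) linear system in (b, α), and the score is a kernel expansion. This is an illustrative implementation under our own naming, not the toolbox used in the paper:

```python
import math

def rbf(u, v, sigma):
    """Gaussian RBF kernel between two feature vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / (2 * sigma ** 2))

def solve(A, rhs):
    """Tiny Gaussian elimination with partial pivoting (A x = rhs)."""
    n = len(A)
    M = [row[:] + [b] for row, b in zip(A, rhs)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def lssvm_train(X, y, gamma, sigma):
    """Solve the LS-SVM dual KKT system:
       [ 0   1^T          ] [b]     [0]
       [ 1   K + I/gamma  ] [alpha] = [y]
    and return the scoring function y_hat(x) = sum_i alpha_i k(x, x_i) + b."""
    n = len(X)
    K = [[rbf(X[i], X[j], sigma) for j in range(n)] for i in range(n)]
    A = [[0.0] + [1.0] * n] + \
        [[1.0] + [K[i][j] + (1.0 / gamma if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    sol = solve(A, [0.0] + list(y))
    b, alpha = sol[0], sol[1:]
    return lambda x: sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, X)) + b
```

A quick usage example: training on two well-separated clusters labelled +1 and −1 gives positive scores near the first cluster and negative scores near the second.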
In this work, the meta-parameters (γ, C) were established using a grid search. The LS-SVM trained for each signer uses the same (γ, C)
meta-parameters; further details about the model construction are given in the next section.

7. Evaluation protocol

7.1. Experiments

Each signer is modelled by an LS-SVM, which is trained with 5 and 10 genuine samples so as to compare the performance of the model against the number of training samples. These samples were chosen randomly. Random forgeries (genuine samples from other signers) were used as negative samples, in a similar way to that outlined in [41]; in our case we took one genuine sample from each of the other users of the database (74 for the MCYT Corpus and 99 for the GPDS Corpus). Keeping in mind the limited number of training samples, leave-one-out cross-validation (LOOCV) was used to determine the parameters (γ, C) of the SVM classifier with RBF kernel. For testing, random and skilled forgeries were taken into account. For random forgeries, we selected a genuine sample from each of the other users of the database (different from the one used for training). For skilled forgeries, all available forgeries were used; this is 15 for the MCYT Corpus and 24 for the GPDS Corpus. The training and testing procedure was repeated 10 times with different training and testing subsets in order to obtain reliable results. Two classical types of error were considered: the Type I error or false rejection rate (FRR), which occurs when an authentic signature is rejected, and the Type II error or false acceptance rate (FAR), which occurs when a forgery is accepted. Finally, the equal error rate (EER) was calculated, keeping in mind that the classes are unbalanced. To calculate the FAR and FRR we need to define a threshold. As the LS-SVM has been trained with a target value of +1 for genuine signatures and −1 for forgeries, we have chosen an a priori constant threshold equal to 0 for all the signers, i.e. if the LS-SVM returns a value greater than or equal to 0, the signature is accepted as genuine.
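The fixed-threshold accept/reject rule, and the FAR/FRR counts derived from it, can be sketched as follows (function name ours):

```python
def far_frr(genuine_scores, forgery_scores, threshold=0.0):
    """Accept iff score >= threshold (the paper fixes threshold = 0
    a priori for all signers).
    FRR: fraction of genuine signatures rejected (Type I error).
    FAR: fraction of forgeries accepted (Type II error)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in forgery_scores) / len(forgery_scores)
    return far, frr
```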
If the LS-SVM returns a value less than 0, the signature is considered a forgery and consequently rejected.

7.2. Results

Experiments were carried out using different values of the LBP^{riu2}_{P,R} parameters R and P. First, the values were set to R = 1 and P = 8. Then they were set to R = 2 and P = 16. Finally, a feature-level combination of both pairs was used. Table 1 shows the results obtained using 5 genuine samples for training and evaluating with skilled forgeries. As can be seen, the best results were obtained using the combination LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. This makes sense, because the new feature vector, of length 10 + 18 = 28, includes information on the first and second pixel rings around the central pixel. Tables 2 and 3 show more detailed information about the results obtained with LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. Tables 4 and 5 present the results for the GLCM characterisation. In order to study the system performance when combining the LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} and GLCM parameters, a feature-level fusion was carried out to obtain a feature vector of dimension 10 + 18 + 8 = 36. Tables 6 and 7 present the results obtained for random and skilled forgeries, respectively. It is easy to see that the EER decreases when combining the different grey level based features. So it seems that the LBP^{riu2}_{P,R} and GLCM texture measures are uncorrelated, which is logical, because each texture measure is based on a different principle: LBP^{riu2}_{P,R} is based on thresholding and the GLCM on joint statistics. As stated above, the quality of the texture based parameters is not solely due to the discriminative ability of the texture to identify writers. The texture parameters, as defined, also include shape information by taking in the pixels on the stroke border. To

Table 1. Results using LBP^{riu2}_{P,R}. Trained with 5 samples and tested with skilled forgeries.
LBP^{riu2}_{P,R} parameters   | Data set | FAR (%) | FRR (%) | EER (%)
R=1, P=8                      | MCYT     | 3.35    | 30.72   | 14.30
R=2, P=16                     | MCYT     | 3.17    | 28.37   | 13.25
{R=1, P=8}+{R=2, P=16}        | MCYT     | 5.00    | 24.56   | 12.82
R=1, P=8                      | GPDS-100 | 3.90    | 32.51   | 16.54
R=2, P=16                     | GPDS-100 | 4.24    | 30.07   | 15.66
{R=1, P=8}+{R=2, P=16}        | GPDS-100 | 6.17    | 22.49   | 13.38

Table 2. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. Tested with random forgeries.

Training   | Data set | FAR (%) | FRR (%) | EER (%) | FAR σ (%) | FRR σ (%)
5 samples  | MCYT     | 0.75    | 26.40   | 3.81    | 0.74      | 9.22
5 samples  | GPDS-100 | 0.36    | 26.64   | 4.59    | 0.40      | 8.40
10 samples | MCYT     | 1.52    | 15.23   | 2.38    | 0.82      | 9.32
10 samples | GPDS-100 | 0.73    | 14.29   | 2.41    | 0.50      | 6.52

Table 3. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2}. Tested with skilled forgeries.

Training   | Data set | FAR (%) | FRR (%) | EER (%) | FAR σ (%) | FRR σ (%)
5 samples  | MCYT     | 5.00    | 24.56   | 12.82   | 3.79      | 9.70
5 samples  | GPDS-100 | 6.17    | 22.49   | 13.38   | 3.90      | 8.29
10 samples | MCYT     | 9.84    | 13.20   | 10.68   | 3.70      | 8.65
10 samples | GPDS-100 | 10.05   | 11.36   | 10.53   | 3.76      | 5.77

Table 4. Results using GLCM based features. Tested with random forgeries.

Training   | Data set | FAR (%) | FRR (%) | EER (%) | FAR σ (%) | FRR σ (%)
5 samples  | MCYT     | 3.12    | 32.76   | 6.65    | 2.19      | 7.49
5 samples  | GPDS-100 | 0.46    | 37.39   | 6.40    | 0.45      | 6.12
10 samples | MCYT     | 5.68    | 21.39   | 6.68    | 1.73      | 8.48
10 samples | GPDS-100 | 1.19    | 26.34   | 4.31    | 0.74      | 7.11

Table 5. Results using GLCM based features. Tested with skilled forgeries.

Training   | Data set | FAR (%) | FRR (%) | EER (%) | FAR σ (%) | FRR σ (%)
5 samples  | MCYT     | 6.49    | 30.93   | 16.27   | 4.16      | 8.78
5 samples  | GPDS-100 | 2.91    | 35.07   | 17.12   | 2.34      | 7.29
10 samples | MCYT     | 9.72    | 21.47   | 12.65   | 3.45      | 8.27
10 samples | GPDS-100 | 4.92    | 24.61   | 12.18   | 2.54      | 7.24

Table 6. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM. Tested with random forgeries.

Training   | Data set | FAR (%) | FRR (%) | EER (%) | FAR σ (%) | FRR σ (%)
5 samples  | MCYT     | 0.86    | 24.21   | 3.64    | 0.76      | 9.77
5 samples  | GPDS-100 | 0.27    | 21.87   | 3.75    | 0.34      | 9.62
10 samples | MCYT     | 1.53    | 12.00   | 2.20    | 0.83      | 8.16
10 samples | GPDS-100 | 0.55    | 10.35   | 1.76    | 0.43      | 5.83
verify this hypothesis, we converted the signatures to black and white and computed the LBP and GLCM matrices from them. The results are given in Tables 8 and 9. It can be seen that these results are only slightly worse than those in Tables 6 and 7. This confirms that the texture features contain shape information, and that the grey level data provide some additional information about the writer.

A comparison of the performance of different signature verification systems is a difficult task, since each author constructs his own signature data sets. The lack of a standard international signature database continues to be a major problem for performance comparison. For the sake of completeness, Table 10 presents some results obtained by published studies that used the MCYT database. Although a direct comparison of the results is not possible, since the training and testing methodologies and the classification strategies used by each author differ, Table 10 allows the results of the proposed methodology to be viewed alongside those published by other authors.

The next step in analysing grey-scale-based features is to combine them with geometry-based features. The two types of features are expected to be uncorrelated, so their combination should improve the automatic handwritten signature verification (AHSV) scheme. For the geometry-based features we used the contour-hinge algorithm proposed in [46] and applied in [45] to AHSV with the MCYT Corpus. Table 11 shows the results obtained by [45] using the contour-hinge algorithm, together with those of our implementation of it. To compare the results, it must be taken into account that our work does not use score normalization, i.e. the threshold is 0 for all users. In contrast, [45] uses a user-dependent a posteriori score normalization; that is to say, their EER indicates the level of performance with an ideal score alignment between users.
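This user-dependent a posteriori normalization of [45] (subtracting each user's EER threshold from the raw matcher score, as detailed next) can be sketched as follows. This is a minimal NumPy illustration with our own function names; the EER threshold is located by scanning the observed scores of that user.

```python
import numpy as np

def eer_threshold(genuine, impostor):
    # User-dependent threshold s_lambda where FAR ~= FRR, estimated
    # a posteriori from that user's genuine and impostor score sets.
    cands = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in cands])  # accepted forgeries
    frr = np.array([(genuine < t).mean() for t in cands])    # rejected genuines
    return cands[np.argmin(np.abs(far - frr))]

def normalise_scores(scores, genuine, impostor):
    # s' = s - s_lambda: after this shift, a single global threshold
    # of 0 is ideally aligned for every user.
    return np.asarray(scores) - eer_threshold(genuine, impostor)

gen = np.array([0.8, 1.1, 1.5, 2.0])   # genuine scores for one user (toy values)
imp = np.array([-0.5, 0.1, 0.4])       # impostor scores for the same user
norm_gen = normalise_scores(gen, gen, imp)
```

On these toy scores the per-user threshold is 0.8, so the normalized genuine scores all fall at or above 0 and the normalized impostor scores below 0, which is what makes the EER of [45] an ideal-alignment figure.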
The score normalization used in [45] is as follows: s' = s - s_lambda, where s is the raw similarity score computed by the signature matcher, s' is the normalized similarity score, and s_lambda is the user-dependent decision threshold at the EER, obtained from a set of genuine and impostor scores for user lambda. So, for a fair comparison, we give our results both with the score normalization of [45] and without score normalization. As can be seen, the contour-hinge parameters work slightly better with the MCYT Corpus than with the GPDS Corpus. Tables 12-17 present results that confirm how features based on grey level information can be combined with features based on binary images to improve overall system performance. These tables report the feature-level combination of LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM + contour-hinge features. Again, 5 and 10 genuine samples, respectively, were used in the training set as positive samples, with random forgeries (genuine samples from

Table 7. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        4.53     23.25     12.02      3.55      9.26
5 samples    GPDS-100    5.13     20.82     12.06      3.43      8.44
10 samples   MCYT        7.53     12.61      8.80      3.96      9.66
10 samples   GPDS-100    8.64      9.66      9.02      3.52      6.52

Table 8. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM with BW signatures. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.57     26.44      3.65      0.58      9.42
5 samples    GPDS-100    0.34     23.44      4.06      0.34      8.96
10 samples   MCYT        1.25     13.79      2.04      0.70      9.32
10 samples   GPDS-100    0.68     11.29      1.99      0.46      6.21

Table 9. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM with BW signatures. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        3.93     26.20     12.84      3.35      8.91
5 samples    GPDS-100    4.68     23.88     13.17      3.27      8.91
10 samples   MCYT        7.86     13.92      9.37      2.87      8.65
10 samples   GPDS-100    9.00     11.03      9.75      3.75      6.46

Table 10. Comparison of the proposed approach with other published methods.
Method                            EER (%)         Value scale
[42]                              25.10           Grey
[43]                              22.40/20.00 (a) B/W
[44]                              15.00           B/W
[10]                              11.00/9.28 (a)  B/W
[45]                              10.18/6.44 (a)  B/W
Proposed approach (LBP+GLCM)      12.02/8.80 (a)  Grey

(a) 5/10 genuine samples used for training.

Table 11. Using the contour-hinge parameters proposed in [45].

Algorithm                                                                               EER (%)
Reported in [45] with the MCYT Corpus                                                    10.18
Implemented here with the a posteriori score normalization of [45], MCYT Corpus          10.32
Implemented here without score normalization, MCYT Corpus                                14.81
Implemented here without score normalization, GPDS Corpus                                15.17

Table 12. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + contour-hinge. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.01     26.47      3.16      0.02      7.59
5 samples    GPDS-100    0.01     22.99      3.71      0.01      6.17
10 samples   MCYT        0.04      8.45      0.57      0.03      6.28
10 samples   GPDS-100    0.02      7.76      0.98      0.03      4.17

Table 13. Results using GLCM + contour-hinge. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.02     27.75      3.32      0.03      7.58
5 samples    GPDS-100    0.00     23.29      3.75      0.01      5.44
10 samples   MCYT        0.05      9.12      0.62      0.06      6.81
10 samples   GPDS-100    0.02      8.42      1.06      0.04      4.02

Table 14. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM + contour-hinge. Tested with random forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        0.03     20.17      2.43      0.03      6.73
5 samples    GPDS-100    0.01     18.26      2.95      0.02      5.52
10 samples   MCYT        0.15      5.07      0.47      0.15      4.93
10 samples   GPDS-100    0.06      5.56      0.74      0.06      3.05
other signers in the database) as negative samples. We should note that in this case the results for the GPDS Corpus are worse than those for the MCYT Corpus because of the contour-hinge performance.

8. Conclusions

A new off-line signature verification methodology based on grey level information has been described. The performance of the system is presented with reference to two experimental signature databases containing samples from 75 and 100 individuals, including skilled forgeries. The experimental results for skilled forgeries (Tables 3 and 5) show that grey level information alone achieves reasonable system performance: EER = 12.82% and 16.27% when the LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} and GLCM features, respectively, are used for the MCYT Corpus. Overall system performance improves when a feature-level fusion of the LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM features is implemented. These latter results compare well with the current state-of-the-art (Table 10). A combination of the proposed LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM approach and the contour-based approach proposed in [45] leads to further performance improvement, especially in the case of random forgeries. We suggest that score-level and decision-level fusion should be studied in the future.

Additionally, a simple, low-computational-cost segmentation algorithm based on posterisation has been proposed. Although a procedure to reduce the effect of ink type was presented, more effort is needed in that particular direction to improve this stage of our system. Nevertheless, given the similar results obtained with the MCYT and GPDS databases using the grey-level-based features, the proposed features seem to display some invariance to pen type, since the MCYT Corpus was acquired with the same pen throughout while the GPDS Corpus was acquired with different pens.

Acknowledgments

This work has been funded by the Spanish government research project MCINN TEC2009-14123-C04; F.
Vargas is supported by the high-level scholarships programme, Programme AlBan, No. E05D049748CO.

References

[1] K. Bowyer, V. Govindaraju, N. Ratha, Introduction to the special issue on recent advances in biometric systems, IEEE Transactions on Systems, Man and Cybernetics—B 37 (5) (2007) 1091–1095.
[2] D. Zhang, J. Campbell, D. Maltoni, R. Bolle, Special issue on biometric systems, IEEE Transactions on Systems, Man and Cybernetics—C 35 (3) (2005) 273–275.
[3] S. Prabhakar, J. Kittler, D. Maltoni, L. O'Gorman, T. Tan, Introduction to the special issue on biometrics: progress and directions, PAMI 29 (4) (2007) 513–516.
[4] S. Liu, M. Silverman, A practical guide to biometric security technology, IEEE IT Professional 3 (1) (2001) 27–32.
[5] R. Plamondon, S. Srihari, On-line and off-line handwriting recognition: a comprehensive survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (1) (2000) 63–84.
[6] K. Franke, J.R. del Solar, M. Köpen, Soft-biometrics: soft computing for biometric applications, Tech. Rep., IPK, 2003.
[7] S. Impedovo, G. Pirlo, Verification of handwritten signatures: an overview, in: ICIAP '07: Proceedings of the 14th International Conference on Image Analysis and Processing, IEEE Computer Society, Washington, DC, USA, 2007, pp. 191–196, doi:10.1109/ICIAP.2007.131.
[8] R. Plamondon, Progress in Automatic Signature Verification, World Scientific Publications, 1994.
[9] M. Fairhurst, New perspectives in automatic signature verification, Tech. Rep. 1, Information Security Technical Report, 1998.
[10] J. Fierrez-Aguilar, N. Alonso-Hermira, G. Moreno-Marquez, J. Ortega-Garcia, An off-line signature verification system based on fusion of local and global information, in: Workshop on Biometric Authentication, Springer LNCS-3087, 2004, pp. 298–306.
[11] Y. Kato, M. Yasuhara, Recovery of drawing order from single-stroke handwriting images, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (9) (2000).
[12] S. Lee, J. Pan, Offline tracking and representation of signatures, IEEE Transactions on Systems, Man and Cybernetics 22 (4) (1992) 755–771.
[13] N. Herbst, C. Liu, Automatic signature verification based on accelerometry, Tech. Rep., IBM Journal of Research and Development, 1977.
[14] C. Sansone, M. Vento, Signature verification: increasing performance by a multi-stage system, Pattern Analysis and Applications 3 (2000) 169–181.
[15] H. Cardot, M. Revenu, B. Victorri, M. Revillet, A static signature verification system based on a cooperative neural network architecture, International Journal on Pattern Recognition and Artificial Intelligence 8 (3) (1994) 679–692.
[16] K. Franke, O. Bünnemeyer, T. Sy, Ink texture analysis for writer identification, in: IWFHR '02: Proceedings of the Eighth International Workshop on Frontiers in Handwriting Recognition, IEEE Computer Society, Washington, DC, USA, 2002, p. 268.
[17] K. Franke, S. Rose, Ink-deposition model: the relation of writing and ink deposition processes, in: IWFHR '04: Proceedings of the Ninth International Workshop on Frontiers in Handwriting Recognition, IEEE Computer Society, Washington, DC, USA, 2004, pp. 173–178, doi:10.1109/IWFHR.2004.59.
[18] Y. Qiao, M. Yasuhara, Recovering dynamic information from static handwritten images, in: Frontiers in Handwriting Recognition 04, 2004, pp. 118–123.
[19] A. El-Baati, A.M. Alimi, M. Charfi, A. Ennaji, Recovery of temporal information from off-line Arabic handwritten, in: AICCSA '05: Proceedings of the ACS/IEEE 2005 International Conference on Computer Systems and Applications, IEEE Computer Society, Washington, DC, USA, 2005, pp. 127–vii.
[20] R. Plamondon, W. Guerfali, The 2/3 power law: when and why? Acta Psychologica 100 (1998) 85–96.
[21] M. Ammar, Y. Yoshida, T.
Fukumura, A new effective approach for automatic off-line verification of signatures by using pressure features, in: Proceedings of the 8th International Conference on Pattern Recognition, 1986, pp. 566–569.
[22] D. Doermann, A. Rosenfeld, Recovery of temporal information from static images of handwriting, International Journal of Computer Vision 15 (1–2) (1995) 143–164.
[23] J. Guo, D. Doermann, A. Rosenfeld, Forgery detection by local correspondence, International Journal of Pattern Recognition and Artificial Intelligence 15 (4) (2001) 579–641.
[24] L. Oliveira, E. Justino, C. Freitas, R. Sabourin, The graphology applied to signature verification, in: 12th Conference of the International Graphonomics Society, 2005, pp. 286–290.
[25] M. Ferrer, J. Alonso, C. Travieso, Offline geometric parameters for automatic signature verification using fixed-point arithmetic, IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (6) (2005) 993–997.
[26] K. Huang, H. Yan, Off-line signature verification based on geometric feature extraction and neural network classification, Pattern Recognition 30 (1) (1997) 9–17.

Table 15. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + contour-hinge. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        2.21     26.43     11.90      1.66      8.27
5 samples    GPDS-100    4.71     24.36     13.40      3.16      7.05
10 samples   MCYT        6.54      8.69      7.08      2.28      6.38
10 samples   GPDS-100   13.67      8.08     11.61      3.86      4.24

Table 16. Results using GLCM + contour-hinge. Tested with skilled forgeries.

Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        1.64     33.31     14.31      1.29      7.68
5 samples    GPDS-100    4.40     28.64     15.11      3.04      6.91
10 samples   MCYT        5.51     13.31      7.46      2.30      8.25
10 samples   GPDS-100   11.90     11.52     11.76      4.42      5.72

Table 17. Results using LBP^{riu2}_{8,1} + LBP^{riu2}_{16,2} + GLCM + contour-hinge. Tested with skilled forgeries.
Training     Data set   FAR (%)   FRR (%)   EER (%)   FAR (s)   FRR (s)
5 samples    MCYT        2.71     24.13     11.28      1.62      7.83
5 samples    GPDS-100    4.79     23.09     12.88      2.74      6.68
10 samples   MCYT        6.77      8.59      7.23      2.45      6.87
10 samples   GPDS-100   13.13      7.46     11.04      3.86      3.91
[27] H. Lv, W. Wang, C. Wang, Q. Zhuo, Off-line Chinese signature verification based on support vector machine, Pattern Recognition Letters 26 (2005) 2390–2399.
[28] A. Mitra, P. Kumar, C. Ardil, Automatic authentification of handwritten documents via low density pixel measurements, International Journal of Computational Intelligence 2 (4) (2005) 219–223.
[29] J. Vargas, M. Ferrer, C. Travieso, J. Alonso, Off-line signature verification based on high pressure polar distribution, in: ICFHR08, Montreal, 2008.
[30] K. Franke, Stroke-morphology analysis using super-imposed writing movements, in: IWCF, 2008, pp. 204–217.
[31] R.W. Conners, C.A. Harlow, A theoretical comparison of texture algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence 2 (3) (1980) 204–222.
[32] R.M. Haralick, Statistical and structural approaches to texture, Proceedings of the IEEE 67 (5) (1979) 786–804.
[33] D. He, L. Wang, J. Guibert, Texture feature extraction, Pattern Recognition Letters 6 (4) (1987) 269–273.
[34] M. Trivedi, C. Harlow, R. Conners, S. Goh, Object detection based on gray level cooccurrence, Computer Vision, Graphics and Image Processing 28 (3) (1984) 199–219.
[35] S. Marcel, Y. Rodriguez, G. Heusch, On the recent use of local binary patterns for face authentication, International Journal on Image and Video Processing, Special Issue on Facial Image Processing, IDIAP-RR 06-34, 2007.
[36] S. Nikam, S. Agarwal, Texture and wavelet-based spoof fingerprint detection for fingerprint biometric systems, in: ICETET '08: Proceedings of the 2008 First International Conference on Emerging Trends in Engineering and Technology, IEEE Computer Society, Washington, DC, USA, 2008, pp. 675–680, doi:10.1109/ICETET.2008.134.
[37] T. Mäenpää, The local binary pattern approach to texture analysis—extensions and applications, Ph.D. thesis, Oulu University, Acta Univ. Oulu C 187, 2003, http://herkules.oulu.fi/isbn9514270762/.
[38] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7) (2002) 971–987.
[39] A.J. Mansfield, J.L. Wayman, Best Practices in Testing and Reporting Performance of Biometric Devices, Version 2.01, National Physical Laboratory, San Jose State University, NPL Report CMSC 14/02, August 2002.
[40] J.A.K. Suykens, T.V. Gestel, J.D. Brabanter, B.D. Moor, J. Vandewalle, Least Squares Support Vector Machines, World Scientific Publishing Co. Pte. Ltd., 2002.
[41] D. Bertolini, L. Oliveira, E. Justino, R. Sabourin, Reducing forgeries in writer-independent off-line signature verification through ensemble of classifiers, Pattern Recognition 43 (1) (2009) 387–396.
[42] I. Güler, M. Meghdadi, A different approach to off-line handwritten signature verification using the optimal dynamic time warping algorithm, Digital Signal Processing 18 (6) (2008) 940–950.
[43] F. Alonso-Fernandez, M.C. Fairhurst, J. Fierrez, J. Ortega-Garcia, Automatic measures for predicting performance in off-line signature, in: IEEE Proceedings of the International Conference on Image Processing, ICIP, vol. 1, 2007, pp. 369–372.
[44] J. Wen, B. Fang, Y. Tang, T. Zhang, Model-based signature verification with rotation invariant features, Pattern Recognition 42 (7) (2009) 1458–1466.
[45] A. Gilperez, F. Alonso-Fernandez, S. Pecharroman, J. Fierrez, J. Ortega-Garcia, Off-line signature verification using contour features, in: Proceedings of the International Conference on Frontiers in Handwriting Recognition, ICFHR, 2008.
[46] M. Bulacu, Statistical pattern recognition for automatic writer identification and verification, Ph.D. thesis, Artificial Intelligence Institute, University of Groningen, The Netherlands, March 2007, http://www.ai.rug.nl/bulacu/.

Jesus F. Vargas was born in Colombia in 1978. He received his B.Sc.
degree in Electronic Engineering in 2001 and his M.Sc. degree in Industrial Automation in 2003, both from Universidad Nacional de Colombia. Since 2004 he has been an Assistant Professor at Universidad de Antioquia, Colombia. He is currently a Ph.D. student at the Technological Centre for Innovation in Communications (CeTIC), Universidad de Las Palmas de Gran Canaria, Spain. His research deals with off-line signature verification.

Carlos M. Travieso-Gonzalez received his M.Sc. degree in Telecommunication Engineering in 1997 from the Polytechnic University of Catalonia (UPC), Spain, and his Ph.D. degree in 2002 from ULPGC, Spain. He has been an Associate Professor at ULPGC since 2001, teaching subjects on signal processing. His research lines are biometrics, classification systems, environmental intelligence, and data mining. He is a reviewer for international journals and conferences, and a member of the IASTED Image Processing Technical Committee.

Jesus B. Alonso received his M.Sc. degree in Telecommunication Engineering in 2001 and his Ph.D. degree in 2006, both from the Department of Computers and Systems at Universidad de Las Palmas de Gran Canaria (ULPGC), Spain. He has been an Associate Professor at ULPGC since 2002. His interests include signal processing in biocomputing, nonlinear signal processing, recognition systems, and data mining.

Miguel A. Ferrer was born in Spain in 1965. He received his M.Sc. degree in Telecommunications in 1988 and his Ph.D. in 1994, both from the Universidad Politécnica de Madrid, Spain. He is an Associate Professor at Universidad de Las Palmas de Gran Canaria, where he has taught since 1990 and heads the Digital Signal Processing Group. His research interests lie in the fields of biometrics and audio quality evaluation. He is a member of the IEEE Carnahan Conference on Security Technology Advisory Committee.