Generation of Duplicated Off-line Signature Images for Verification Systems

Moises Diaz, Miguel A. Ferrer, George S. Eskander, and Robert Sabourin, Member, IEEE
Abstract—Biometric researchers have historically seen signature duplication as a procedure relevant to improving the performance of automatic signature verifiers. Different approaches have been proposed to duplicate dynamic signatures based on heuristic affine transformations, nonlinear distortion and kinematic models of the motor system. The literature on static signature duplication is limited, is based, as far as we know, on heuristic affine transforms, and does not seem to consider recent advances in the neuroscientific modeling of human behavior. This paper tries to fill this gap by proposing a cognitive inspired algorithm to duplicate off-line signatures. The algorithm is based on a set of nonlinear and linear transformations which simulate the intra-personal variability of the human spatial cognitive map and motor system during the signing process. The duplicator is evaluated by artificially increasing a training sequence and verifying that the performance of four state-of-the-art off-line signature classifiers on two public databases improves on average as if three more real signatures had been collected.
Index Terms—biometric signature identification, signature synthesis, off-line signature verification, performance evaluation, off-line
signature recognition, equivalence theory.
1 INTRODUCTION

For centuries, the handwritten signature has been accepted world-wide for the purpose of authentication.
Classical applications include the legal validation of documents such as contracts, last wills and testaments, corporate tax statements, financial transfers and so on. This has led forensic scientists, graphologists, neurological practitioners and therapists, among others, to take an interest in the creation and validity of handwritten signatures. This interest is manifested in the many publications and surveys [1], [2], [3], [4], [5], [6], [7], [8] published in the literature during previous decades. However, modeling the intra-personal variability of a signature is still an open challenge, one which has caught the attention of researchers in pattern recognition and machine intelligence.

Understanding the intra-personal variability of a signer's signatures is important: modeling it allows the distinction between genuine and non-genuine signatures to be widened. The generation of duplicated specimens with a realistic appearance helps in gaining a better understanding of signature execution from several neuroscientific perspectives. It also supports coherent decision making in psychology and forensic science, and assists in optimizing speed and performance for indexing purposes.
This paper focuses on modeling intra-personal variability in order to duplicate static signatures. In this context, duplicating a signature means artificially generating a new specimen from one or several real signatures. Among its advantages, signature duplication can improve the training of automatic signature verification (ASV) systems, allows statistically meaningful evaluations to be carried out, enlarges the number of signatures in databases, can match the baseline performances obtained with real signatures, which are often difficult to collect, and can improve the performance of existing automatic signature verifiers.

(Author affiliations: M. Diaz and M. A. Ferrer are with the Instituto Universitario para el Desarrollo Tecnológico y la Innovación en Comunicaciones, Universidad de Las Palmas de Gran Canaria, Las Palmas 35017, Spain. E-mail: {mdiaz, mferrer}@idetic.eu. G. S. Eskander and R. Sabourin are with the Laboratoire d'imagerie, de Vision et d'Intelligence Artificielle, École de Technologie Supérieure, Université du Québec, 1100, rue Notre-Dame Ouest, Room A-3600, Montréal, QC H3C 1K3, Canada. E-mail: [email protected], [email protected].)
According to the literature, there are three methods of signature duplication: i) the creation of dynamic (on-line) signatures from real on-line samples (On-2-On); ii) the creation of static (off-line) signatures from dynamic specimens (On-2-Off); and iii) the production of signature images (off-line) from static signatures (Off-2-Off). Note that on-line signature generation from off-line specimens (Off-2-On) remains, to the best of our knowledge, an open issue.
Most of the recent advances in modeling intra-personal variability focus on on-line signatures. For this category of signature capture, applying random and affine deformations has been shown to improve the performance of an HMM-based classifier [12]. In [13], another method is studied for increasing the training set on the basis of a clonal selection of the enrolled signatures, without modifying either the diversity of the overall set or the sparse feature distribution. Also, in Diaz et al. [14], the kinematic theory of rapid movement is applied to enlarge the enrolled signature data set; such artificially enlarged sets have also been used for testing purposes [9].
There are other proposals in the literature focused on the generation of signature images from on-line signatures [11], [15], [16], [17], [19]. The common tendency is to apply different methods to dynamic signatures, since these record the kinematics and the temporal order in which the traces are registered. Once a new trajectory is obtained, the samples of the new specimen are interpolated in order to create new images. Then, an off-line automatic classifier is used to assess
the performance improvement. Parallel to this approach, a method of generating enhanced synthetic signature images has been formulated using a novel architecture to improve the performance of dynamic verifiers [18].

TABLE 1: State-of-the-art on duplicated signature generation

| Authors | Methods | Conversion | Seed | Target |
|---|---|---|---|---|
| Munich et al., 2003 [9] | Affine-scale/geometrical transformations | On-2-On | >1 Sign. | Statistically meaningful evaluation |
| Frias et al.¹, 2006 [10] | Affine-scale/geometrical transformations | Off-2-Off | 1 Sign. | Enlarge database |
| Rabasse et al., 2008 [11] | Affine-scale/geometrical transformations | On-2-Off | 2 Sign. | Approaching the baseline performance |
| Galbally et al., 2009 [12] | Affine-scale/geometrical transformations | On-2-On | 1 Sign. | Improve the performance |
| Song et al., 2014 [13] | Clonal Selection Algorithm | On-2-On | >1 Sign. | Improve the performance |
| Diaz et al., 2015 [14] | Kinematic Theory | On-2-On | 1 Sign. | Improve the performance |
| Ferrer et al., 2013 [15] | Interpolation methods | On-2-Off | 1 Sign. | Approaching the baseline performance |
| Guest et al., 2014 [16] | Interpolation methods | On-2-Off | 1 Sign. | Approaching the baseline performance |
| Diaz-Cabrera et al., 2014 [17] | Interpolation methods | On-2-Off | 1 Sign. | Approaching the baseline performance |
| Galbally et al., 2015 [18] | Interpolation methods | On-2-Off | 1 Sign. | Approaching the baseline performance |
| Diaz-Cabrera et al.², 2014 [19] | Cognitive Inspired Model | On-2-Off | 1 Sign. | Improve the performance |
| This method, 2015 | Cognitive Inspired Model | Off-2-Off | 1 Sign. | Improve the performance |
| Open Issue, ? | ? | Off-2-On³ | 1 Sign. | ? |

(On-2-On = from real on-line to duplicated on-line signature; On-2-Off = from real on-line to duplicated off-line signature; Off-2-Off = from real off-line to duplicated off-line signature; Off-2-On = from real off-line to duplicated on-line signature.)
¹ This paper focused on two classifiers. However, how much the duplicates improve the performance was not studied as fully as the effect of a higher number of signatures in the database.
² Because the real signatures were on-line, the temporal execution of the signature was known.
³ To the best of the authors' knowledge, this problem has not yet been examined in the scientific literature.
In our review of previous work, we found little on duplication from off-line to off-line signatures. One example is [10], where an off-line signature dataset composed of 6 genuine specimens per user and 38 signers is enlarged by applying affine transformations to the original signatures. Since the database contained only genuine signatures, the study focused only on recognition, and the authors did not include a deliberate forgery test. Moreover, although the authors enlarged the training set, the paper scarcely addressed how the duplicated signatures were constructed and made no reference to the cognitive signing procedure.
In order to situate the work reported here within the current literature, Table 1 schematically summarizes the state-of-the-art in duplicated signature generation. This analysis reveals the need for more research in duplicating off-line signatures from real off-line signatures.
As well as covering geometrical and affine image deformations, the work reported in this paper aims to fill the gap between the heuristic methods used for off-line signature duplication and methods for intra-personal variability modeling. The techniques we develop are based on human behavior, as examined by neuroscience. Specifically, a duplication procedure is proposed on the basis of modeling the neuromotor equivalence theory [20], which divides human action into effector dependent and effector independent parts of the signing procedure. Note that we do not claim to model human behavior as neuroscience does: we simply use ideas from neuroscience to generate signature duplicates. Our results are encouraging.
The realism of the intra-personal variability model is
evaluated by increasing a training sequence with duplicates
and ascertaining the improvement in performance of four
different state-of-the-art generative classifiers. So as to consider as many aspects of the variability as possible, we have
chosen verifiers which are based on different features and
classifiers. Additionally, we have used two different public
datasets. The improved performance after training with the
enlarged set is discussed as well as the complementary
information contained in data produced by the cognitive
inspired duplication algorithm.
The remainder of this paper is organized as follows. Section 2 surveys the cognitive ideas used to design our method, whose algorithm is detailed in Section 3. Section 4 describes the method used to evaluate the duplicator, while Sections 5 and 6 present the results and conclusions, respectively.
2 COGNITIVE INSPIRED MODEL
In this section, the equivalence model is introduced along
with our duplication model. Around the middle of the last
century, Lashley [21], then Hebb [22] and later Bernstein [23]
formulated the motor equivalence theory. This was studied
from a musculo-skeletal viewpoint by focusing on central
nervous system (CNS) activity used to control posture and
movement. Such a theory deals with the ability of the
relevant effectors to generate arm and wrist movement.
As in [20], [24], [25], the generation of the handwritten signature can be divided between two main effectors which are active in the brain: the effector dependent cognitive level and the effector independent motor level. It is well known that signing requires highly complex, fine motor control and cognitive skills. However, once learnt, signing soon becomes an automatic and effortless process. In [26], such ability is attributed to an inverse model of the subject being controlled by motor learning. Additionally, prior to its execution, the movement is planned through a determined spatial position for each handwritten component and its relative position. In practice, this might be considered as an action plan that links target points. Following Hafting et al. [27], the target points are considered to be nodes of a hexagonal grid.
Although both effectors are fairly stable, they have a certain degree of variability and are also affected by external inputs and internal psychological conditions. In fact, under pressure, an individual usually needs to recall the shape of his/her signature before and during signing, and this leads to specimens with unusual variability.
Regarding the cognitive level variability, it has been proposed that human spatial orientation is represented by a hexagonal grid spanning the space occupied. Because the grid is not rigid, each of its nodes changes slightly every time it is traversed [27] while conducting a signature. This deformation depends on the stroke direction and speed at every grid node, i.e. on the velocity vector. This variability can be altered as a consequence of psychiatric diseases and with aging.
The motor level variability may be considered as the variability of a general motor command that the signer tries to follow through the cognitive grid map. Driven by the muscular path reaction, this effect places the signature's ballistic trajectory at a slightly different position in each repetition, according to individual muscle activity. It can also be interpreted, from the point of view of the kinematic theory [28], as various agonist or antagonist strokes increasing or decreasing their amplitudes.
The novelty of our paper lies in the design of a signature duplicator inspired by this theory. The intention is to provide a more human-like intra-personal variability than other duplicators [11], [15], [16], [17], [18], [19]. On the one hand, the cognitive level variability is approached through grid deformations. Among the different grid deformation patterns in the literature, we selected the sinusoidal one, which has been successfully applied in CAPTCHA generation [29]. This deformation enlarges some strokes while shortening others. When the concept of sinusoidal deformation is applied to a hexagonal grid, the result is a new grid similar to that represented in Fig. 1. This concept enables the generation of intra-personal variability involving all signature components, without reconstructing the trajectory. This is also known as intra-component variability.
On the other hand, it could be said that the effector independent part deals mainly with the dynamic aspects of the handwriting. Certainly, estimating the pseudo-dynamic information needed to produce the handwritten trajectories is a classic challenge [30]. Different approaches have been used to address this problem, e.g. the application of specialized pens [31], [32], recovering the skeleton of the handwriting by thinning the trajectories to 1-pixel-width images [33], [34], calculating the contour to study the critical parts, such as the loops [35], [36], or even estimating the stroke order by hierarchical reconstruction [37]. Nevertheless, the problem remains an open challenge in handwriting.
Because of both the cognitive grid deformation and the varying motor arm inertia caused by pose, clothing, jewelry, etc., changes are produced in the relation between the different non-connected components, that is, in the positions of the pen before and after a pen-up. As the images do not contain the pen-up trajectories, this kind of variability is dealt with by labeling the different unconnected signature components in the image. This applies to all individuals and is a way of taking into account the pen-up variability from sample to sample. This further variability in our model is the so-called inter-component variability.

Fig. 1: Hexagonal cell unit distortion with sinusoidal transformation. Original grid on the left and distorted one on the right.
A certain degree of inclination variation occurs between repetitions of a signature. Often this is related to the pose, the way the paper is positioned or how the pen is held. Thus, the final stage of the duplicator introduces a skew modification to the signature.
We should make it clear that we are not modeling brain
function or motor activity. We use our interpretation of
motor equivalence to inspire the design of the algorithm
and thus to improve the intra-personal variability models.
3 GENERATION OF DUPLICATED OFF-LINE SIGNATURES FROM REAL OFF-LINE SIGNATURES
This section describes the steps of the duplicator algorithm: signature segmentation, intra-component variability, component labeling, inter-component variability and signature inclination. A general overview of the duplicator is depicted in Fig. 2, and its main steps are formalized in Algorithm 1.
Algorithm 1 The cognitive inspired duplicator

Input: I_in(x, y), a grayscale original signature image.
Output: I_dup(x, y), a grayscale duplicated signature image.

1: ▷ Signature Segmentation
2: I_P(x, y) ← SigSeg{I_in(x, y)}
3: ▷ Intra-Component Variability
4: I_S(x^s, y^s) ← IntraCV{I_P(x, y), [α_A^min, α_A^max, α_P^min, α_P^max, α_S^min, α_S^max]}
5: ▷ Component Labeling
6: {I_i(x, y)}_{i=1}^{L} ← I_S(x^s, y^s)
7: ▷ Inter-Component Variability
8: I_dis(x, y) ← InterCV{{I_i(x, y)}_{i=1}^{L}, [(ξ_x^1, σ_x^1, µ_x^1), (ξ_x^2, σ_x^2, µ_x^2), (ξ_x^3, σ_x^3, µ_x^3), (ξ_y^1, σ_y^1, µ_y^1), (ξ_y^2, σ_y^2, µ_y^2), (ξ_y^3, σ_y^3, µ_y^3), κ_1, κ_2, γ_T, ψ]}
9: ▷ Signature Inclination
10: I_dup(x, y) ← SigInc{I_dis(x, y), [ξ_S, σ_S, µ_S]}
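For orientation, a minimal Python sketch of a driver for Algorithm 1's pipeline is given below. The five stage functions are hypothetical names standing in for the procedures of Sections 3.1-3.5 (illustrative sketches of most of them appear in those sections); this is not the authors' released implementation.

```python
import numpy as np

def duplicate_signature(I_in: np.ndarray, params: dict, rng=None) -> np.ndarray:
    """Drive the five stages of Algorithm 1 in order (all stage functions
    are assumed to be in scope; they are sketched in the sections below)."""
    I_p = segment_signature(I_in)                            # Sec. 3.1: background removal + crop
    I_s = intra_component_variability(I_p, rng=rng)          # Sec. 3.2: sinusoidal warp, Eq. (1)
    comps = label_components(I_s)                            # Sec. 3.3: 8-connected labeling
    I_dis = inter_component_variability(comps, params, rng)  # Sec. 3.4: GEV displacements
    return signature_inclination(I_dis, rng=rng)             # Sec. 3.5: GEV-sampled rotation
```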
3.1 Signature Segmentation
Let I(x, y) be the 256-level gray scale signature image input
to the duplicator. The segmentation process is performed
to remove the background from the scanned images. A
simple thresholding operation is applied [38] to obtain a
binary image Ibw (x, y). Because this image still contains
noise, careful processing is carried out to remove it [39]. The resulting image is used as a mask to remove the background and to segment the original inked signature. Next, the canvas size is processed by cropping the white borders of the image, thus obtaining the preprocessed image I_P(x, y) according to Algorithm 2.

Fig. 2: General overview of the off-line signature duplicator. [Block diagram: Signature Segmentation → Intra-Component Variability → Component Labeling → Inter-Component Variability → Signature Inclination, with the parameter sets of Algorithm 1 feeding each stage.]
Algorithm 2 Border Cropping Procedure

Function: I(x̂, ŷ) ← BorderCrop{I(x, y)}

1: I_bw(x, y) ← im2bw{I(x, y)} ▷ Binary image
2: h ← sum{I_bw(x, y)} ▷ Horizontal pixel distribution
3: i* ← find{h > 0}
4: I_bw(x, ŷ) ← I_bw(x, i*)
5: I(x, ŷ) ← I(x, i*)
6: v ← sum{I_bw(x, ŷ)^T} ▷ Vertical pixel distribution
7: j* ← find{v > 0}
8: I(x̂, ŷ) ← I(j*, ŷ)
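As a concrete illustration, a possible NumPy version of Algorithm 2 is sketched below. It assumes a grayscale image with a light background and dark ink; the `ink_thresh` value is an illustrative stand-in for the im2bw threshold of [38]. The pseudocode keeps every row and column that contains ink, which coincides with cropping to the ink bounding box, as done here, whenever the signature has no fully blank interior bands.

```python
import numpy as np

def border_crop(img: np.ndarray, ink_thresh: int = 200) -> np.ndarray:
    """Crop the white borders of a grayscale signature image.

    ink_thresh is an illustrative binarization level: pixels darker
    than it count as ink.
    """
    ink = img < ink_thresh                        # binary ink mask
    rows = np.where(ink.any(axis=1))[0]           # rows containing ink
    cols = np.where(ink.any(axis=0))[0]           # columns containing ink
    if rows.size == 0 or cols.size == 0:          # blank image: nothing to crop
        return img
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```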
3.2 Intra-component variability
The intra-component variability is introduced by a piecewise sine wave function. Let I_P(x, y) be a segmented gray scale signature whose canvas size is defined by M columns and N rows. A sinusoidal transformation is applied to the rows and columns of the image according to equation (1) to obtain I_S(x^s, y^s):

$$\begin{aligned} x^s &= x + A_x \sin(\omega_x x + \varphi_x) \\ y^s &= y + A_y \sin(\omega_y y + \varphi_y) \end{aligned} \qquad (1)$$
The parameters of the sine wave are defined as follows: i) the sinusoidal amplitudes are calculated for both coordinates as A_x = M/α_A and A_y = N/α_A, where α_A is a factor which follows a uniform distribution U(α_A^min, α_A^max); ii) the angular frequencies are obtained through the oscillation periods as ω_x = 2π/τ_x and ω_y = 2π/τ_y; iii) a certain variability is added to the period, similarly to the amplitude: τ_x = M/α_P and τ_y = N/α_P, where the parameter α_P follows a uniform distribution U(α_P^min, α_P^max); iv) finally, the phase is defined by ϕ = 2πα_S, where α_S follows a uniform distribution U(α_S^min, α_S^max). Accordingly, we compute ϕ_x and ϕ_y.
Algorithm 3 calculates this intra-component variability
distortion.
Algorithm 3 Intra-component variability

Function: I_S(x^s, y^s) ← IntraCV{I_P(x, y), [α_A^min, α_A^max, α_P^min, α_P^max, α_S^min, α_S^max]}

1: ▷ Let {M, N} be the rows and columns of I_P(x, y)
2: ▷ Let {P, Q} be the rows and columns of I_S(x^s, y^s)
3: ▷ Random parameter selection
4: α_A ← U(α_A^min, α_A^max); A_x ← M/α_A
5: α_A ← U(α_A^min, α_A^max); A_y ← N/α_A
6: α_P ← U(α_P^min, α_P^max); τ_x ← M/α_P; ω_x ← 2π/τ_x
7: α_P ← U(α_P^min, α_P^max); τ_y ← N/α_P; ω_y ← 2π/τ_y
8: α_S ← U(α_S^min, α_S^max); ϕ_x ← 2πα_S
9: α_S ← U(α_S^min, α_S^max); ϕ_y ← 2πα_S
10: ▷ Building the distorted image
11: x^s ← x + A_x sin(ω_x x + ϕ_x)
12: y^s ← y + A_y sin(ω_y y + ϕ_y)
13: for i ← 0 to P − 1 do
14:   for j ← 0 to Q − 1 do
15:     if (0 ≤ x_i ≤ P − 1) ∧ (0 ≤ y_j ≤ Q − 1) then
16:       I_S(x_i^s, y_j^s) ← I_P(x_i, y_j)
17:     else
18:       I_S(x_i^s, y_j^s) ← 1
19:     end if
20:   end for
21: end for
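A compact NumPy sketch of Algorithm 3 follows, assuming a floating-point image with white encoded as 1.0 and using the parameter bounds of Table 2 as defaults. The forward mapping can leave isolated holes and resolves collisions as last-write-wins; the paper does not specify an interpolation scheme, so this is a simplification.

```python
import numpy as np

def intra_component_variability(I_p: np.ndarray,
                                a_A=(5.0, 30.0), a_P=(0.5, 1.0), a_S=(0.0, 1.0),
                                rng=None) -> np.ndarray:
    """Sinusoidal warp of Eq. (1); default bounds follow Table 2."""
    rng = np.random.default_rng() if rng is None else rng
    N, M = I_p.shape                                   # N rows, M columns
    Ax, Ay = M / rng.uniform(*a_A), N / rng.uniform(*a_A)  # amplitudes
    wx = 2.0 * np.pi / (M / rng.uniform(*a_P))             # angular frequencies
    wy = 2.0 * np.pi / (N / rng.uniform(*a_P))
    phx = 2.0 * np.pi * rng.uniform(*a_S)                  # phases
    phy = 2.0 * np.pi * rng.uniform(*a_S)

    y, x = np.mgrid[0:N, 0:M]                          # source pixel coordinates
    xs = np.round(x + Ax * np.sin(wx * x + phx)).astype(int)  # Eq. (1)
    ys = np.round(y + Ay * np.sin(wy * y + phy)).astype(int)

    I_s = np.ones_like(I_p)                            # white canvas (background = 1)
    ok = (xs >= 0) & (xs < M) & (ys >= 0) & (ys < N)   # keep in-range targets only
    I_s[ys[ok], xs[ok]] = I_p[y[ok], x[ok]]            # forward-map the ink
    return I_s
```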
Visual details of this transformation on the word "da" are shown in Fig. 3: different sinusoidal transformation parameters distort the inked image, producing different duplicates. The figure illustrates how the intra-component relationship is modified.
Fig. 3: Visual examples of intra-component variability in different repetitions of a scanned image (panels (a)-(j)). Note how the loops in the letters "d" and "a" are opened and closed without losing their original continuity.
3.3 Component Labeling

In this step each connected area in the binary image is separately labeled [40]. Starting from the first detected pixel, the algorithm searches for all 8-connected areas. Using the information in each label, an enclosing box is built for each isolated inked component. Fig. 4 shows the components detected in different handwritten signatures. In some examples it is possible to see that the flourish is merged with the text; in such cases, a large area of the signature is detected as one component (see Fig. 4-c or 4-d, among others). This stage generates a set of L individual images {I_i(x, y)}_{i=1}^{L}, one per labeled component of the image I_S(x^s, y^s).
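The 8-connected labeling of [40] is available in standard libraries; the sketch below uses scipy.ndimage.label (an assumed dependency, not the authors' code) and returns one full-size image per component, as required by the next stage.

```python
import numpy as np
from scipy import ndimage

def label_components(I_s: np.ndarray, ink_thresh: float = 0.8) -> list:
    """Split the signature into its 8-connected inked components.

    Returns one full-size image per component, each holding only that
    component's ink over a white (1.0) background.
    """
    ink = I_s < ink_thresh                         # illustrative binarization
    eight = np.ones((3, 3), dtype=bool)            # 8-connectivity structure
    labels, n = ndimage.label(ink, structure=eight)
    components = []
    for i in range(1, n + 1):
        comp = np.ones_like(I_s)                   # white canvas
        mask = labels == i
        comp[mask] = I_s[mask]                     # copy this component's ink
        components.append(comp)
    return components
```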
3.4 Inter-component variability

The inter-component variability is dealt with by applying independent horizontal and vertical displacements to each labeled component. In the ideal case, each labeled component represents an inked trace from the moment the pen tip touches the paper until it is lifted. This section assumes that the larger a component's ratio of inked pixels, the more rapidly it was drawn, and that it will consequently be subjected to more inter-component variability. Fig. 4 reveals that many components are not correctly labeled, mainly because flourishes are drawn over many letters. Although this could introduce certain errors for signatures with a prominent flourish, this classification has been used in the algorithm for convenience [19]. Additionally, for signatures with a small flourish or none at all, this stage still introduces personal variability into the duplicates (see Fig. 4-f and 4-j).
Let us define a sequence of labeled images {I_i(x, y)}_{i=1}^{L}, where each image contains one detected component. The new image I_dis with the displaced components is computed as:

$$I_{dis}(x, y) = \sum_{i=1}^{L} I_i(x + \delta_x^i,\; y + \delta_y^i) \qquad (2)$$
Three kinds of section, delimited by the thresholds κ_1 and κ_2, were identified for each of the horizontal and vertical coordinates. The displacement (δ_x, δ_y) of each component is then worked out as follows:
$$\delta_x = \begin{cases} \mathrm{gevrnd}\{\xi_x^1, \sigma_x^1, \mu_x^1\} & \text{if } \Gamma_i < \kappa_1 \\ \mathrm{gevrnd}\{\xi_x^2, \sigma_x^2, \mu_x^2\} & \text{if } \kappa_1 \le \Gamma_i < \kappa_2 \\ \mathrm{gevrnd}\{\xi_x^3, \sigma_x^3, \mu_x^3\} & \text{if } \Gamma_i \ge \kappa_2 \end{cases} \qquad (3)$$

$$\delta_y = \begin{cases} \mathrm{gevrnd}\{\xi_y^1, \sigma_y^1, \mu_y^1\} & \text{if } \Gamma_i < \kappa_1 \\ \mathrm{gevrnd}\{\xi_y^2, \sigma_y^2, \mu_y^2\} & \text{if } \kappa_1 \le \Gamma_i < \kappa_2 \\ \mathrm{gevrnd}\{\xi_y^3, \sigma_y^3, \mu_y^3\} & \text{if } \Gamma_i \ge \kappa_2 \end{cases} \qquad (4)$$

Fig. 4: Visual examples of labeling in handwritten signatures. The non-connected components are highlighted by assigning a different color to each signature component.
To categorize each component, a ratio Γ_i = γ_i/γ_T is calculated per component, denoting the relationship between the number of inked pixels γ_i in the individual component and the total number of inked pixels γ_T in I_S(x^s, y^s). The displacements are obtained as pseudo-random values drawn from a Generalized Extreme Value (GEV) distribution [41]. In the algorithms, we indicate its use with a function named gevrnd, whose inputs are the parameters of the distribution.
This kind of distribution is traditionally used for modeling extremes of natural phenomena such as waves, winds, temperatures, earthquakes, floods, etc. Additionally, its use has been previously studied in handwritten signature modeling [42], and it offers higher flexibility than other distributions such as the Gaussian. The GEV distribution is parameterized by {ξ_i^k, σ_i^k, µ_i^k}, which represent the shape, scale and location parameters respectively, where i denotes the horizontal or vertical coordinate and k the section. More formally, let x be an independent variable. Its density function is:

$$f(x; \mu, \sigma, \xi) = \frac{1}{\sigma}\left[1 + \xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-\frac{1}{\xi}-1} \exp\left\{-\left[1 + \xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\} \qquad (5)$$
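A sketch of drawing the per-component displacements of Eqs. (3) and (4) with SciPy is shown below. Note that scipy.stats.genextreme uses a shape parameter c = −ξ with respect to the ξ of Eq. (5); the thresholds and parameter triplets would come from Table 2, and rounding to integer pixel shifts is our assumption.

```python
from scipy.stats import genextreme

def draw_displacement(gamma_i: int, gamma_T: int,
                      params_x, params_y,
                      k1: float = 0.33, k2: float = 0.67, rng=None):
    """Draw (dx, dy) for one component following Eqs. (3) and (4).

    params_x / params_y are three (xi, sigma, mu) triplets, one per section;
    gamma_i / gamma_T are the component's and the signature's inked-pixel
    counts, so their quotient is the Gamma_i of the text.
    """
    ratio = gamma_i / gamma_T
    section = 0 if ratio < k1 else (1 if ratio < k2 else 2)
    xi, sigma, mu = params_x[section]
    # SciPy's genextreme uses shape c = -xi w.r.t. the xi of Eq. (5).
    dx = genextreme.rvs(c=-xi, loc=mu, scale=sigma, random_state=rng)
    xi, sigma, mu = params_y[section]
    dy = genextreme.rvs(c=-xi, loc=mu, scale=sigma, random_state=rng)
    return int(round(dx)), int(round(dy))
```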
After the displacement, the summation of individual
images over Idis could overlap certain pixels of two or
more individual images. In real handwriting, a similar effect
of ink summation is noted in crossed over traces. If there
are such crossovers, to obtain the gray scale values in the
relevant pixels, two steps are taken: Step 1: One of the
individual components is randomly chosen to be the first
drawn trace and the second is summed to make the new
image. Step 2: For only the crossed traces, a simple blending
factor ψ is worked out to mimic the effect of having two overlapped traces.

If no pixel overlaps, Step 2 is omitted during the summation of the two relevant individual I_i images. We formalize this stage, using simple concepts from set theory, together with the rest of the inter-component variability stage, in Algorithm 4. Finally, an extra stage introduces variability into the whole image; it is explained in the next subsection.

Algorithm 4 Inter-component variability

Function: I_dis(x, y) ← InterCV{{I_i(x, y)}_{i=1}^{L}, [(ξ_x^1, σ_x^1, µ_x^1), (ξ_x^2, σ_x^2, µ_x^2), (ξ_x^3, σ_x^3, µ_x^3), (ξ_y^1, σ_y^1, µ_y^1), (ξ_y^2, σ_y^2, µ_y^2), (ξ_y^3, σ_y^3, µ_y^3), κ_1, κ_2, γ_T, ψ]}

1: I_dis(x, y) ← I_1(x, y)
2: for i ← 2 to L do
3:   γ_i ▷ The total number of inked pixels of component i
4:   Γ_i ← γ_i/γ_T
5:   if Γ_i < κ_1 then
6:     δ_x ← gevrnd{ξ_x^1, σ_x^1, µ_x^1}; δ_y ← gevrnd{ξ_y^1, σ_y^1, µ_y^1}
7:   else if κ_1 ≤ Γ_i < κ_2 then
8:     δ_x ← gevrnd{ξ_x^2, σ_x^2, µ_x^2}; δ_y ← gevrnd{ξ_y^2, σ_y^2, µ_y^2}
9:   else
10:     δ_x ← gevrnd{ξ_x^3, σ_x^3, µ_x^3}; δ_y ← gevrnd{ξ_y^3, σ_y^3, µ_y^3}
11:   end if
12:   A ← I_dis(x, y); B ← I_i(x + δ_x, y + δ_y)
13:   C ← (A \ B) ∪ (B \ A) ▷ Symmetric difference
14:   D ← A ∩ B ▷ Intersection
15:   D_bw ← im2bw{D}
16:   E ← A · D_bw · ψ + B · D_bw · (1 − ψ)
17:   I_dis(x, y) ← C ∪ E ▷ Union
18: end for

Fig. 5: Cognitive inspired model to create a duplicated off-line signature using only one real off-line specimen: (a) original scanned image I_in(x, y); (b) segmentation I_P(x, y); (c) sinusoidal transformation I_S(x^s, y^s); (d) component labeling {I_i(x, y)}_{i=1}^{L}; (e) displacement between components I_dis(x, y); (f) signature inclination and new duplicated off-line signature I_dup(x, y).

3.5 Signature inclination modification

Signatures can be written in an ascending, descending or horizontal manner. This inclination is the so-called skew. Ref. [42] studies the skew of signatures from different signers (inter-personal variability). Here, we focus on the signature inclination of each signer and how it varies from the signer's average inclination (intra-personal variability). The procedure to measure this intra-personal distribution is as follows. Let N be the number of genuine signatures from a signer. The skew with respect to the horizontal is measured for each specimen, ρ_i. Then, the user's average inclination is computed, to finally obtain the difference ρ̂_i between each individual value and the average:

$$\hat{\rho}_i = \rho_i - \left(\frac{1}{N}\sum_{i=1}^{N} \rho_i\right) \qquad (6)$$

When the inclination dispersion is calculated for N signers, a global probability density function (pdf) is estimated by the non-parametric histogram method. Accordingly, a Generalized Extreme Value (GEV) distribution has been used to approximate the measured pdf, with parameters [ξ_S, σ_S, µ_S].

This stage is summarized in Algorithm 5. Because the skew of the signature introduces personal variability into the set of genuine signatures, each image is rotated by an angle drawn from this GEV distribution. Again, the image borders are cropped to remove the white pixels at the edges of the signature. At the end of this stage, a duplicated signature I_dup(x, y) is obtained.

Algorithm 5 Signature Inclination Algorithm

Function: I_dup(x, y) ← SigInc{I_dis(x, y), [ξ_S, σ_S, µ_S]}

1: M ← gevrnd{ξ_S, σ_S, µ_S}
2: I_rot(x, y) ← imrotate{I_dis(x, y), M}
3: I_dup(x, y) ← BorderCrop{I_rot(x, y)}

As a final example, Fig. 5 illustrates the effect of each stage on an off-line signature.
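To make the composition concrete, the sketch below gives an array-level analogue of Algorithm 4's blending step and of Algorithm 5, with scipy.ndimage standing in for the imrotate/shift operations of the pseudocode and border_crop reusing the Section 3.1 sketch. The rotation angle is assumed to be in degrees and the white background to be 1.0; these conventions are our assumptions, not statements from the paper.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import genextreme

def compose_with_blending(canvas: np.ndarray, comp: np.ndarray,
                          dx: int, dy: int,
                          psi: float = 0.8, ink_thresh: float = 0.8) -> np.ndarray:
    """Paste one displaced component onto the canvas: an array analogue of
    Algorithm 4's set-theoretic steps. Non-overlapping ink is kept as-is;
    crossed-over pixels are blended with factor psi."""
    shifted = ndimage.shift(comp, (dy, dx), cval=1.0, order=0)  # white fill, no wrap
    crossed = (canvas < ink_thresh) & (shifted < ink_thresh)    # overlapping ink
    out = np.minimum(canvas, shifted)                           # union of the traces
    out[crossed] = psi * canvas[crossed] + (1.0 - psi) * shifted[crossed]
    return out

def signature_inclination(I_dis: np.ndarray,
                          xi_s: float = -0.19, sigma_s: float = 3.28,
                          mu_s: float = -1.30, rng=None) -> np.ndarray:
    """Algorithm 5: rotate by a GEV-sampled skew angle (assumed degrees),
    fill the new corners with white, then crop the borders."""
    angle = genextreme.rvs(c=-xi_s, loc=mu_s, scale=sigma_s, random_state=rng)
    I_rot = ndimage.rotate(I_dis, angle, reshape=True, cval=1.0, order=1)
    img255 = np.clip(I_rot * 255.0, 0, 255).astype(np.uint8)   # back to 0-255 range
    return border_crop(img255)     # reuse the Section 3.1 border-crop sketch
```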
4 EVALUATION METHODOLOGY
One way of evaluating whether the duplicator produces signatures with human-like variability is to process the training samples in an automatic signature verifier (ASV) and look for a performance improvement. In the training stage, the ASV tries to model the signer's signature and
its variability, and the accuracy of this model is evaluated in the testing phase. The more information provided to the ASV, the better its performance will be. Therefore, an ASV trained with the original signatures plus their duplicates will improve its performance in the testing phase if, and only if, the duplicator generates duplicated signatures with a variability similar to that of the genuine user. The performance comparison is expected to be clearly in favor of the proposed novel scheme. Fig. 6 summarizes the evaluation methodology, highlighting the performance comparison between the cognitive-inspired duplication model and the traditional one.
This procedure is conducted with two off-line signature databases, namely GPDS-300 [43] and MCYT-75 [44], along with four published state-of-the-art ASVs, so as to avoid biased results and to obtain more consistent and general conclusions. Specifically, System A [45] works with geometrical features and Hidden Markov Models (HMM); System B [46] employs the Boosting Feature Selection (BFS) approach and single grid-based features; System C [39] is a Support Vector Machine (SVM) classifier with texture features; and System D [47], which is a third-party system, uses pose-orientated grid features and an SVM.
Signature verification systems are designed to be effective in two specific tests: the random forgery test and the deliberate forgery test. The random forgery test addresses the situation in which an impostor, without previous knowledge of a specific signature, tries to verify the identity of another signer using his own genuine signature; it is the typical test in access control and commercial transactions. Conversely, the deliberate forgery test simulates the case where an impostor learns the signature of a signer and tries to reproduce it with a similar intra-class variability. This test is the most relevant in signature verification because of its impact in forensic applications of signature forgery detection.
For training, different strategies have been carried out. In this work, the training set consists of the first two, the first five, or the first eight real genuine signatures. For each strategy, we added duplicated signatures to the training set. For instance, let 5 be the number of real signatures enrolled in the system; we then trained with these 5 real signatures plus 0, 1, 5, 10 or 20 duplicated signatures, obtained from each individual real enrolled specimen.
To test on the GPDS-300 database, we used the ninth to the twenty-third genuine signatures in all cases, i.e. 15 signatures per user. For a fair comparison, we used the identical test set in all experiments. For MCYT-75, the testing set is composed of 7 signatures: the ninth to the fifteenth. This way, the false rejection rate (FRR) is calculated with 15·300 = 4500 scores for GPDS-300 and 7·75 = 525 scores for MCYT-75.
For the random forgery test, we selected the first testing signature of each of the other users, i.e. signature number nine according to the database nomenclature. We compute the false acceptance rate (FAR) with 1·(300−1)·300 = 89700 scores for GPDS-300 and 1·(75−1)·75 = 5550 scores for MCYT-75. For the deliberate forgery test, all deliberately forged signatures in the databases were used; we therefore compute the FAR using 30·300 = 9000 scores for GPDS-300 and 15·75 = 1125 scores for MCYT-75. We note that the deliberately forged signatures are never used for training.

Fig. 6: Procedure to evaluate the human-like variability of the duplicator. [Diagram: in the traditional scheme, a generic off-line signature classifier is trained on the real off-line training set alone (performance A); in the novel scheme, a real off-line set-up set is additionally passed through the cognitive inspired duplication model before training (performance B).] Following the novel scheme branch, performance B is expected to be better than performance A.
To study the efficiency of the duplicated signatures in these biometric verification systems, two classical types of error were evaluated: the Type I error or FRR, which measures the rejection of authentic signatures, and the Type II error or FAR, which evaluates the acceptance of a forgery. To assess the systems with a common metric, the results are given in terms of the Equal Error Rate (EER), which represents the operating point at which the Type I and Type II errors coincide, as well as the Area Under the Curve (AUC). In this work we compensate for unbalanced classes by using the method in [48], [49], which estimates the threshold for each class separately. Finally, in order to show how the system thresholds vary according to the performance of the classifiers, we also illustrate some receiver operating characteristic (ROC) curves in graphical form.
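For clarity on the reported operating point, a small self-contained sketch of how an EER can be estimated from genuine and impostor score arrays is given below. It assumes higher scores mean "more genuine" and is illustrative only: each system computes its scores differently, and the class-unbalance compensation of [48], [49] is not reproduced here.

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Estimate the EER: the operating point where the FRR (rejected
    genuines) equals the FAR (accepted impostors) as the decision
    threshold sweeps over the observed scores."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(frr - far)))        # closest crossing point
    return float((frr[i] + far[i]) / 2.0)

# Illustrative use with synthetic scores (not the paper's data):
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(1.0, 0.5, 525), rng.normal(0.0, 0.5, 5550)))
```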
5 RESULTS AND DISCUSSION

5.1 Cognitive inspired duplicator set-up
Setting up the duplicator algorithms requires optimization of their parameters, and therefore a development dataset and a development ASV. To avoid over-fitting the data, this is conducted on a subset of one of the two training databases and with one of the four ASVs. Specifically, the development set is composed of the first 5 samples of the first 150 users of GPDS-300. It is used to train System C, which is based on texture features followed by a Support Vector Machine.
The parameters were heuristically optimized in a trial
and error procedure in three steps. Firstly, initial values were
given to each parameter in order to produce the desired
distortion effect; secondly, a coarse tuning of the parameters
by a perceptual evaluation of the results was conducted;
and thirdly, a fine-tuning of the parameters to produce
the best performance with the above mentioned ASV was
undertaken.
On intra-component variability, incorrect selection of a
parameter could produce an unnatural handwriting image.
TABLE 2: Configuration of the duplicator parameters.

| Stage | Parameters |
|---|---|
| Intra-Component Variability | α_A^min = 5; α_A^max = 30; α_P^min = 0.5; α_P^max = 1; α_S^min = 0; α_S^max = 1 |
| Inter-Component Variability | {ξ_x^1, σ_x^1, µ_x^1} = {−0.5, 20, 2σ_x^1}; {ξ_x^2, σ_x^2, µ_x^2} = {−0.5, 1.4σ_x^1, 2·1.4σ_x^1}; {ξ_x^3, σ_x^3, µ_x^3} = {−0.5, 1.8σ_x^1, 2·1.8σ_x^1}; {ξ_y^1, σ_y^1, µ_y^1} = {−0.5, 8, σ_y^1}; {ξ_y^2, σ_y^2, µ_y^2} = {−0.5, 1.2σ_y^1, 1.2σ_y^1}; {ξ_y^3, σ_y^3, µ_y^3} = {−0.5, 1.5σ_y^1, 1.5σ_y^1}; κ_1 = 0.33; κ_2 = 0.67; ψ = 0.8 |
| Signature Inclination | ξ_S = −0.19; σ_S = 3.28; µ_S = −1.30 |

Fig. 7: Examples of natural (✓) and unnatural (×) writing styles: (a) intra-component, (b) inter-component. At the top, from left to right, the values used to illustrate the intra-component variability for the four signatures were, respectively: α_A = 30, 5, 5, 5; α_P = 0.8, 0.8, 0.5, 0.5; and α_S = 0, 0, 0, 0.5. Similarly, at the bottom, from left to right, the values used to define the inter-component variability were, respectively, σ_x^1 = 10, 0, 40, 40 and σ_y^1 = 0, 4, 4, 16.

Because α_A is inversely proportional to the amplitude, large values produce hardly any effect (the resulting amplitudes are close to 0), whereas low values would produce rectangular traces if α_P and α_S were 1 and 0, respectively. When the amplitude is near one and the sinusoidal phase is null, large values of α_P create highly sinusoidal images that are not human-like. Nevertheless, a combination of values in the correct ranges produces acceptably human-like results. An example of writing style modification is illustrated in Fig. 7-a, where correct parameter combinations are highlighted with a tick and unnatural-looking signatures with a cross.

On the inter-component variability, too much deformation produces strange effects in the text. Specifically, an excessively large value of δ_x can change the order of the text: for instance, the original name "Peter" could be converted into "Peert". Likewise, an excessive vertical displacement δ_y yields an unnatural ordering of the letters in the vertical direction. These effects can be observed in Fig. 7-b. Moreover, the inter-component variability is quite sensitive to the size of the image. It was therefore necessary to fix the location parameter ξ. Because the natural variation in vertical displacement is usually positive, ξ was fixed at −0.5 in order to bring the center of mass of the distribution back to the lower position. Because the scale σ controls the width of the range of possible values, it is increased according to the kind of section, thus giving more variability to the most heavily inked components. Finally, the parameter µ shifts the distribution without changing its shape; experimentally, this parameter is related to the scale for natural writing and to the relationship between sections. The sections were divided into equal ranges by κ_1 and κ_2, and ψ was visually fixed at 0.8 to account for the natural effect of the ink.

Table 2 shows the parameters and the ranges used in this work. Finally, the natural writing style obtained through our model can be seen in Fig. 8, where a set of five possible duplicates is generated from only one original signature.

The computational time needed to generate a duplicated signature from a real image is on average less than 4 s, although this depends on the size of the input image. This time was measured by running the duplicator on a regular PC under Ubuntu 12.04 LTS, with an Intel Core i7-3770 CPU @ 3.40 GHz and 15.6 GB of RAM. The signature duplicator has been made publicly available for research purposes at www.gpds.ulpgc.es.
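For reference, the values of Table 2 could be collected into a single configuration structure; the layout below is a hypothetical convenience for the sketches in Section 3, with σ_x^1 = 20 and σ_y^1 = 8 taken from the table.

```python
# Hypothetical Python layout of Table 2 (sigma_x^1 = 20, sigma_y^1 = 8).
SX1, SY1 = 20.0, 8.0
DUPLICATOR_PARAMS = {
    "intra": {"alpha_A": (5, 30), "alpha_P": (0.5, 1.0), "alpha_S": (0.0, 1.0)},
    "inter": {
        # (xi, sigma, mu) per section, for x and y displacements
        "x": [(-0.5, SX1, 2 * SX1),
              (-0.5, 1.4 * SX1, 2 * 1.4 * SX1),
              (-0.5, 1.8 * SX1, 2 * 1.8 * SX1)],
        "y": [(-0.5, SY1, SY1),
              (-0.5, 1.2 * SY1, 1.2 * SY1),
              (-0.5, 1.5 * SY1, 1.5 * SY1)],
        "kappa": (0.33, 0.67),
        "psi": 0.8,
    },
    "inclination": {"xi": -0.19, "sigma": 3.28, "mu": -1.30},
}
```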
5.2 Impact of adding duplicated signatures to the training set
The aim of this experiment is to ascertain whether our duplicated signatures present a human-like variability through
the complementarity between them and the signatures used
to generate them. Accordingly, we have trained different
classifiers without duplicates, with duplicates and with
the same number of real signatures. Also the test used is
identical in all cases. The goal is to determine the ability of
the cognitive approach to generate samples with complementary information about the signature owner, especially in critical conditions when only a few signatures are available for training.

Fig. 8: Examples of multiple signatures from only one original (seed) signature. The first column shows the original signature and the rest the duplicated samples. Some details of the signature variability are highlighted with gray spots.

TABLE 3: Equal Error Rate (first) and Area Under Curve (second) results in % for the GPDS-300 Off-line Signature DB for four validations. The left four system columns correspond to the random forgery test, the right four to the deliberate forgery test. The baselines are the rows with 0 D/R (shaded in gray in the original).

| R* | D/R* | Random A | Random B | Random C | Random D | Deliberate A | Deliberate B | Deliberate C | Deliberate D |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 8.30 – 97.13 | 17.10 – 92.20 | 2.84 – 99.58 | 8.88 – 96.98 | 34.34 – 71.30 | 34.19 – 72.28 | 24.86 – 82.74 | 28.87 – 78.24 |
| 2 | 1 | 8.42 – 96.91 | 13.12 – 95.23 | 2.59 – 99.64 | 7.53 – 97.58 | 33.62 – 71.88 | 31.80 – 74.19 | 25.11 – 82.69 | 29.42 – 78.11 |
| 2 | 5 | 7.63 – 97.25 | 9.71 – 97.10 | 1.90 – 99.79 | 4.65 – 98.95 | 33.55 – 71.73 | 29.20 – 77.37 | 23.70 – 84.16 | 26.88 – 80.64 |
| 2 | 10 | 7.10 – 97.59 | 8.63 – 97.48 | 1.69 – 99.84 | 4.22 – 99.16 | 33.05 – 72.30 | 28.67 – 77.92 | 22.68 – 85.32 | 25.73 – 81.74 |
| 2 | 20 | 6.25 – 97.87 | 8.04 – 97.75 | 1.43 – 99.88 | 3.81 – 99.30 | 32.01 – 73.67 | 28.55 – 78.12 | 21.63 – 86.46 | 25.95 – 82.06 |
| 5 | 0 | 4.95 – 98.63 | 5.58 – 98.92 | 0.92 – 99.93 | 4.38 – 99.04 | 29.37 – 76.78 | 25.57 – 82.05 | 20.91 – 86.95 | 24.69 – 82.64 |
| 5 | 1 | 5.29 – 98.64 | 4.09 – 99.36 | 0.75 – 99.95 | 3.42 – 99.31 | 30.61 – 75.88 | 24.91 – 82.87 | 19.99 – 87.66 | 24.36 – 83.50 |
| 5 | 5 | 4.41 – 98.87 | 3.01 – 99.56 | 0.54 – 99.98 | 2.41 – 99.60 | 28.58 – 77.77 | 24.19 – 83.29 | 18.98 – 88.73 | 23.25 – 84.93 |
| 5 | 10 | 4.23 – 98.89 | 2.76 – 99.61 | 0.48 – 99.98 | 2.19 – 99.68 | 28.60 – 77.51 | 23.82 – 83.61 | 17.98 – 89.70 | 22.90 – 84.91 |
| 5 | 20 | 4.16 – 99.00 | 2.64 – 99.64 | 0.36 – 99.98 | 1.83 – 99.73 | 27.86 – 78.57 | 24.04 – 83.39 | 17.19 – 90.39 | 22.57 – 85.36 |
| 8 | 0 | 4.80 – 98.78 | 4.01 – 99.33 | 0.57 – 99.97 | 2.85 – 99.53 | 29.03 – 77.63 | 23.96 – 84.10 | 19.02 – 88.96 | 22.16 – 85.48 |
| 8 | 1 | 4.85 – 98.80 | 3.19 – 99.57 | 0.45 – 99.98 | 2.65 – 99.54 | 28.89 – 77.96 | 23.32 – 84.38 | 18.21 – 89.46 | 22.16 – 85.81 |
| 8 | 5 | 4.02 – 99.00 | 2.46 – 99.74 | 0.29 – 99.99 | 2.01 – 99.65 | 27.82 – 78.85 | 23.54 – 84.43 | 16.45 – 91.29 | 21.52 – 86.07 |
| 8 | 10 | 4.19 – 99.00 | 2.29 – 99.75 | 0.22 – 99.99 | 1.81 – 99.72 | 27.10 – 79.76 | 23.11 – 84.51 | 15.08 – 92.17 | 21.60 – 86.57 |
| 8 | 20 | 4.28 – 99.03 | 2.78 – 99.62 | 0.20 – 99.99 | 1.34 – 99.82 | 26.60 – 80.29 | 20.39 – 87.19 | 14.58 – 92.72 | 19.97 – 88.08 |

* R is the number of real enrolled signatures; D/R the number of duplicates per real enrolled signature.
The essential part of this section focuses on demonstrating that the progressive addition of duplicated signatures to a training set improves the system performance, highlighting the cognitive-based duplicator's capability of producing human-like variability in signature verification. This finding has been demonstrated by testing two publicly available off-line databases and four state-of-the-art off-line signature verifiers, as mentioned in Section 4. Table 3 and Table 4 analyze the resulting performance, both in terms of Equal Error Rate and Area Under Curve.
"GPDS-300 and System A". We observe the same tendency in both the random and the deliberate forgery tests: the more duplicates we use, the better the results. This highlights the potential of the duplicated signatures in both tests, with the biggest improvement for deliberate forgeries.

"GPDS-300 and System B". Excellent improvements in the verification rates are achieved, especially when we have few signatures to train the model and for deliberate forgeries. Note that the deliberate forgery test is the most critical one for ASV systems, so these results highlight the potential of this duplicator for security or forensic applications. Moreover, it is worth noting how close the performance of 5 R + 20 D/R is to that of 8 R + 0 D/R, which suggests that the duplicates are equivalent to three real signatures.

"GPDS-300 and System C". This case confirms the hypothesis that, whatever the traditional training set, the duplicates provide a significant performance improvement on the GPDS database. Again, the most significant improvements are observed when the training models have few signatures. It is worth noting that even when 8 real signatures are used for training, the cognitive model still contributes additional information in all cases; moreover, it achieves a promising performance with only 8 real signatures.

"GPDS-300 and System D". The robustness of our duplicating procedure is further supported by this fourth
Fig. 9: ROC plots training with the first 5 real signatures plus the duplicates for GPDS-300.
system, since it was not designed by the authors of this article. The relevant improvement with the three different training sets confirms the utility of the cognitive duplication method over the traditional one. The most relevant effect is observed in the case of 2 real enrolled signatures, where the performance improvement is 5 percent for the random forgery test.
In Fig. 9 we show, as an example for a model composed of 5 real signatures, the two errors (FAR and FRR) in a ROC plot for each system and each test. These curves show the real behavior at all operating points. The general observation for the middle case of 5 real signatures again demonstrates that the improvement occurs not only at the operational point but at all points of the ROC curves.
Similar findings can be observed for MCYT-75 in Table 4:

TABLE 4: Equal Error Rate (first) and Area Under Curve (second) results in % for the MCYT-75 Off-line Signature DB for four validations. The left four system columns correspond to the random forgery test, the right four to the deliberate forgery test. The baselines are the rows with 0 D/R (shaded in gray in the original).

| R* | D/R* | Random A | Random B | Random C | Random D | Deliberate A | Deliberate B | Deliberate C | Deliberate D |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 6.84 – 97.81 | 17.42 – 93.08 | 1.95 – 99.77 | 3.09 – 99.29 | 21.40 – 86.35 | 34.36 – 73.04 | 16.88 – 89.98 | 21.15 – 87.45 |
| 2 | 1 | 5.31 – 98.51 | 7.78 – 98.39 | 1.77 – 99.80 | 3.44 – 99.31 | 22.94 – 83.83 | 30.84 – 76.53 | 17.90 – 88.82 | 20.54 – 88.28 |
| 2 | 5 | 3.51 – 99.22 | 1.39 – 99.93 | 1.69 – 99.86 | 2.36 – 99.58 | 20.37 – 86.94 | 25.20 – 82.58 | 17.63 – 89.67 | 18.41 – 89.48 |
| 2 | 10 | 3.59 – 99.03 | 0.17 – 100.00 | 1.29 – 99.92 | 1.72 – 99.73 | 20.09 – 87.16 | 24.73 – 83.74 | 16.70 – 90.38 | 16.88 – 90.88 |
| 2 | 20 | 3.05 – 99.25 | 0.17 – 100.00 | 0.69 – 99.96 | 1.11 – 99.81 | 19.03 – 87.86 | 23.67 – 84.44 | 16.06 – 91.47 | 16.50 – 91.50 |
| 5 | 0 | 3.18 – 99.11 | 6.09 – 99.08 | 0.81 – 99.96 | 1.31 – 99.81 | 18.18 – 90.53 | 24.26 – 83.84 | 14.06 – 93.25 | 16.42 – 91.40 |
| 5 | 1 | 2.24 – 99.35 | 1.81 – 99.92 | 0.89 – 99.95 | 1.25 – 99.83 | 17.17 – 90.44 | 20.99 – 86.41 | 13.58 – 93.19 | 15.71 – 92.30 |
| 5 | 5 | 2.01 – 99.53 | 0.00 – 100.00 | 0.48 – 99.99 | 0.81 – 99.91 | 16.08 – 91.50 | 19.02 – 88.32 | 12.44 – 94.16 | 15.17 – 92.63 |
| 5 | 10 | 2.11 – 99.56 | 0.19 – 100.00 | 0.22 – 100.00 | 0.50 – 99.94 | 16.35 – 91.50 | 20.15 – 88.25 | 11.99 – 94.82 | 14.19 – 93.65 |
| 5 | 20 | 2.26 – 99.57 | 0.00 – 100.00 | 0.32 – 100.00 | 0.34 – 99.96 | 15.27 – 91.63 | 16.58 – 90.09 | 11.90 – 95.05 | 14.02 – 93.83 |
| 8 | 0 | 1.78 – 99.72 | 1.41 – 99.93 | 0.36 – 99.99 | 0.37 – 99.96 | 13.42 – 93.50 | 15.13 – 92.08 | 11.05 – 95.06 | 12.90 – 93.49 |
| 8 | 1 | 1.90 – 99.72 | 0.36 – 99.99 | 0.25 – 99.99 | 0.50 – 99.96 | 14.32 – 93.02 | 14.62 – 92.07 | 11.61 – 95.45 | 12.76 – 94.05 |
| 8 | 5 | 1.56 – 99.81 | 0.19 – 100.00 | 0.23 – 100.00 | 0.32 – 99.97 | 14.34 – 92.71 | 14.72 – 92.24 | 9.72 – 96.34 | 12.43 – 94.74 |
| 8 | 10 | 1.20 – 99.86 | 0.38 – 100.00 | 0.09 – 100.00 | 0.31 – 99.98 | 13.06 – 93.73 | 16.11 – 91.34 | 9.78 – 96.66 | 12.57 – 94.69 |
| 8 | 20 | 1.06 – 99.90 | 0.38 – 100.00 | 0.14 – 100.00 | 0.30 – 99.98 | 12.02 – 94.08 | 15.26 – 91.61 | 9.12 – 97.01 | 11.57 – 94.92 |

* R is the number of real enrolled signatures; D/R the number of duplicates per real enrolled signature.

"MCYT-75 and System A". The most notable improvements are for the training set with 2 real signatures in the random forgery test and with 5 real ones in the deliberate forgery test, duplicating 20 times per real enrolled specimen. The table in this case shows consistent results which verify the targeted goal.

"MCYT-75 and System B". Excellent results are observed here. We highlight the effect in the random forgery test, where both the EER and AUC indicate the maximum performance for 5 real signatures. An anomaly in our results occurs at an outlier which produces a strange performance
(EER=0.19 %) in the random forgery test with 5 R + 10 D/R.
As we said in Section 4, the FRR is computed with 525 scores, so a single failure implies a value of 1/525 = 0.0019 (or 0.19 %). Such a value in the table indicates that only one genuine signature was incorrectly classified. Statistically speaking, this inconsistency is irrelevant to the impact of the duplicator, so we do not need to adjust our conclusions. Moreover, the most relevant improvement is obtained for the training set with 2 real enrolled samples. This highlights the convenience of the method in criminology and forensics, where only one or two genuine samples may be available. In this case, we can observe for deliberate forgeries, the most difficult and relevant test, a notable improvement when only 2 real signatures are enrolled.
"MCYT-75 and System C". Again, this system tends to reinforce the hypothesis that the proposed duplicator is robust enough to capture the intra-personal variability. It is worth pointing out that, under experimental conditions and using an objectively fair protocol, impressive performance is achieved with 5 R + 20 D/R for the deliberate forgeries.

"MCYT-75 and System D". The potential of the cognitive method is also demonstrated with this system. The best impact is achieved with deliberate forgeries using only two real signatures in the training set, whereas the minimum impact is observed where the performance is already quite competitive, as in the case of 8 real enrolled signatures in the random forgery test.
Graphically, the case of 5 real enrolled signatures is summarized in Fig. 10 using ROC plots. Beyond the operating points, once again we can see a significant and consistent improvement in all cases.
These experiments validate the robustness of the duplicator with respect to each database: at zero human cost and in all assessed conditions, it outperforms the baseline. As a general tendency, our experimental results lead to the conclusion that 20 duplicated specimens have a similar effect, in average performance terms, to 3 real signatures. In one particular case, in the random forgery test, 5 real specimens in the training
set with System B proved equivalent to adding more than 5 real signatures, in both databases.

Fig. 10: ROC plots training with the first 5 real signatures plus the duplicates for MCYT-75.
It should be noted that the feature extraction and the classifier of each ASV are completely different, which explains the different sensitivities in each case. Nevertheless, the effect on performance is always positive, which sustains the hypothesis that the cognitive inspired method is able to introduce realistic intra-class variability into the systems. In effect, the controlled deformation allows us to enlarge and better adapt the model boundaries for each classifier and for each kind of training model.
5.3 Impact on the performance of individual signers

This experiment aims to quantify how the addition of duplicated signatures affects the performance of an individual user. We thus evaluate the number of users whose performance benefits when duplicated signatures are used in the training set alongside the real ones.
Previous sections suggest that the more duplicates we use, the lower the error rate obtained. Therefore, the comparison is conducted between models trained with no duplicates and models trained with 20 duplicates per enrolled signature image. The difference in the performance obtained will
indicate how the duplicator affects the individual user’s
0162-8828 (c) 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TPAMI.2016.2560810, IEEE
Transactions on Pattern Analysis and Machine Intelligence
JOURNAL OF LATEX CLASS FILES, VOL. 13, NO. 9, SEPTEMBER 2014
11
TABLE 5: Proportion of beneficiary users (per unit) for the GPDS-300 and MCYT-75 databases.

EER Differences, GPDS-300
            Random Forgery                          Deliberate Forgery
            2 R+20 D/R  5 R+20 D/R  8 R+20 D/R      2 R+20 D/R  5 R+20 D/R  8 R+20 D/R
System A    0.79        0.78        0.80            0.71        0.66        0.74
System B    0.89        0.95        0.88            0.79        0.72        0.74
System C    0.93        0.96        0.98            0.77        0.81        0.83
System D    0.94        0.90        0.86            0.69        0.65        0.67

EER Differences, MCYT-75
            Random Forgery                          Deliberate Forgery
            2 R+20 D/R  5 R+20 D/R  8 R+20 D/R      2 R+20 D/R  5 R+20 D/R  8 R+20 D/R
System A    0.89        0.85        0.91            0.68        0.75        0.80
System B    1.00        1.00        0.99            0.71        0.67        0.55
System C    0.96        1.00        1.00            0.72        0.81        0.81
System D    0.92        0.95        0.88            0.85        0.84        0.75

* R means the number of real enrolled signatures and D/R the number of duplicates per real enrolled signature.
Fig. 11: Users arranged by EER difference improvement for
the GPDS-300.
Fig. 12: Users arranged by EER difference improvement for
MCYT-75.
performance. We have employed the Equal Error Rate metric for this experiment, so the users are arranged in ascending order according to the difference in EER between training with and without duplicates. The analysis is carried out for each database under the proposed evaluation methodology, as explained in Section 4.
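Under our reading of this protocol, the per-user analysis reduces to the following hedged sketch, where eer_without and eer_with are hypothetical arrays holding each signer's EER before and after adding the 20 duplicates per real signature:

    import numpy as np

    def per_user_improvement(eer_without, eer_with):
        # A positive difference means the user benefits from the duplicates.
        diff = np.asarray(eer_without) - np.asarray(eer_with)
        ranked = np.sort(diff)                   # ascending, as in Figs. 11 and 12
        proportion = float(np.mean(diff > 0))    # beneficiary users, per unit
        return ranked, proportion

The proportion returned this way corresponds to the per-unit figures reported in Table 5.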
“GPDS-300”. The random forgery improvements are over 85 % in almost all cases, as shown in Fig. 11, with Systems B and C achieving the most improved results. Also, the fewer real signatures we include in the training set, the bigger the individual performance increase we obtain, while the number of users who benefit is similar in all cases. It is reasonable to deduce that with only two real signatures the improvement is even greater; in this case, the contribution of the duplicator model is most efficient. Additionally, modest but very promising improvements are obtained in the deliberate forgery test, where more than 75 % of users improve their individual performance. In this test, what stands out is not the trend with fewer real enrolled images but the stability of the individual improvement, with Systems B and C improving slightly more than Systems A and D.
“MCYT-75”. Again, random forgery appears to be the test where the improvement is practically at the maximum per user, especially for Systems B and C, as can be seen in Fig. 12. Moreover, this reinforces the finding that the fewer the real signature images enrolled, the higher the proportion of improvement per individual user. In the deliberate forgery test, the average performance improvement again applies to around 75 % of users. System B follows the same trend as in the random forgery test, whereas Systems A and C do not.
Finally, Table 5 indicates numerically the proportion of users who achieve a higher performance when duplicated signatures are used to train their models. Although the performance of a few users may be degraded, these results suggest that it is effective to use the proposed cognitive duplicator in signature verification, producing as many signatures as the available computing resources permit.
5.4
Comparison with other duplication methods
Duplication procedures have been widely used in signature verification to artificially enlarge the training set. While many different proposals have contributed to the state of the art for on-line signatures, affine transformation is the most popular approach to duplicating image-based signatures. To compare our cognitive duplication method with the state of the art, we have duplicated the real enrolled signatures with affine transformations according to the method described in [10]. Following the same evaluation methodology described in Section 4, the experiments have been repeated and the results are given in Table 6, where each real enrolled signature is duplicated twenty times.
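For intuition, a minimal sketch of such an affine-based duplication follows; the parameter ranges are illustrative assumptions of our own, not those of [10]. It warps a gray-level signature image by a small random rotation, scaling and shear about the image center:

    import numpy as np
    from scipy.ndimage import affine_transform

    def affine_duplicate(image, rng):
        theta = np.deg2rad(rng.uniform(-5.0, 5.0))     # small rotation
        sx, sy = rng.uniform(0.95, 1.05, size=2)       # mild scaling
        shear = rng.uniform(-0.05, 0.05)               # mild shear
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        mat = rot @ np.array([[sx, shear], [0.0, sy]])
        center = np.array(image.shape) / 2.0
        offset = center - mat @ center                 # keep the center fixed
        return affine_transform(image, mat, offset=offset, cval=255)

    # Twenty duplicates per real enrolled signature, as in the experiments:
    rng = np.random.default_rng(0)
    # duplicates = [affine_duplicate(img, rng) for _ in range(20)]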
TABLE 6: Equal Error Rate (first figure) and Area Under Curve (second figure) results in % for the comparison between the affine and cognitive duplication methods on the GPDS-300 and MCYT-75 databases. Baseline rows are trained with the real signatures only.

GPDS-300, Random Forgery
R  D/R  Method      System A        System B        System C        System D
2    0  baseline     8.30 – 97.13   17.10 – 92.20    2.84 – 99.58    8.88 – 96.98
2   20  affine       7.57 – 97.24    5.85 – 98.74    2.51 – 99.69    4.17 – 99.15
2   20  cognitive    6.25 – 97.87    8.04 – 97.75    1.43 – 99.88    3.53 – 99.31
5    0  baseline     4.95 – 98.63    5.58 – 98.92    0.92 – 99.93    4.38 – 99.04
5   20  affine       5.67 – 98.42    1.50 – 99.86    0.89 – 99.94    2.21 – 99.67
5   20  cognitive    4.16 – 99.00    2.64 – 99.64    0.36 – 99.98    1.67 – 99.78
8    0  baseline     4.80 – 98.78    4.01 – 99.33    0.57 – 99.97    2.85 – 99.53
8   20  affine       5.04 – 98.77    1.41 – 99.89    0.62 – 99.96    1.54 – 99.81
8   20  cognitive    4.28 – 99.03    2.78 – 99.62    0.20 – 99.99    1.04 – 99.89

GPDS-300, Deliberate Forgery
R  D/R  Method      System A        System B        System C        System D
2    0  baseline    34.34 – 71.30   34.19 – 72.28   24.86 – 82.74   28.87 – 78.24
2   20  affine      33.12 – 72.57   33.10 – 73.15   23.22 – 84.40   25.09 – 82.94
2   20  cognitive   32.01 – 73.67   28.55 – 78.12   21.63 – 86.46   25.01 – 82.71
5    0  baseline    29.37 – 76.78   25.57 – 82.05   20.91 – 86.95   24.69 – 82.64
5   20  affine      29.99 – 75.95   28.69 – 77.72   19.76 – 88.13   21.34 – 86.72
5   20  cognitive   27.86 – 78.57   24.04 – 83.39   17.19 – 90.39   21.68 – 86.62
8    0  baseline    29.03 – 77.63   23.96 – 84.10   19.02 – 88.96   22.16 – 85.48
8   20  affine      28.28 – 78.74   28.03 – 78.72   17.42 – 90.51   19.11 – 88.76
8   20  cognitive   26.60 – 80.29   20.39 – 87.19   14.58 – 92.72   18.66 – 89.07

MCYT-75, Random Forgery
R  D/R  Method      System A        System B        System C        System D
2    0  baseline     6.84 – 97.81   17.42 – 93.08    1.95 – 99.77    3.09 – 99.29
2   20  affine       5.15 – 98.31    1.25 – 99.92    0.87 – 98.50    1.51 – 99.73
2   20  cognitive    3.05 – 99.25    0.17 – 100.00   0.69 – 99.96    1.11 – 99.81
5    0  baseline     3.18 – 99.11    6.09 – 99.08    0.81 – 99.96    1.31 – 99.81
5   20  affine       3.69 – 99.09    0.38 – 100.00   0.25 – 99.40    0.38 – 99.95
5   20  cognitive    2.26 – 99.57    0.00 – 100.00   0.32 – 100.00   0.34 – 99.96
8    0  baseline     1.78 – 99.72    1.41 – 99.93    0.36 – 99.99    0.37 – 99.96
8   20  affine       2.18 – 99.60    0.00 – 100.00   0.27 – 99.75    0.16 – 99.98
8   20  cognitive    1.06 – 99.90    0.38 – 100.00   0.14 – 100.00   0.30 – 99.98

MCYT-75, Deliberate Forgery
R  D/R  Method      System A        System B        System C        System D
2    0  baseline    21.40 – 86.35   34.36 – 73.04   16.88 – 89.98   21.15 – 87.45
2   20  affine      21.71 – 85.42   25.14 – 81.65   16.71 – 90.28   18.27 – 89.99
2   20  cognitive   19.03 – 87.86   23.67 – 84.44   16.06 – 91.47   16.50 – 91.50
5    0  baseline    18.18 – 90.53   24.26 – 83.84   14.06 – 93.25   16.42 – 91.40
5   20  affine      17.58 – 89.51   21.01 – 85.94   12.34 – 94.46   13.94 – 93.52
5   20  cognitive   15.27 – 91.63   16.58 – 90.09   11.90 – 95.05   14.02 – 93.83
8    0  baseline    13.42 – 93.50   15.13 – 92.08   11.05 – 95.06   12.90 – 93.49
8   20  affine      15.54 – 91.87   18.62 – 88.93    9.45 – 96.06   12.76 – 94.54
8   20  cognitive   12.02 – 94.08   15.26 – 91.61    9.12 – 97.01   11.57 – 94.92

* R means the number of real enrolled signatures and D/R the number of duplicates per real enrolled signature.
** The cognitive duplication results are those reported in Table 3 and Table 4.
The experimental results show a better performance of the cognitive method in all cases for System A. In favor of the cognitive duplication method, the performance of the affine-based method is less competitive and is worse than the baseline in some cases. Similar tendencies are observed for System B when random forgeries are tested, despite the exceptional improvements of the affine-based method on GPDS-300. The importance of our proposal is highlighted, however, in the deliberate forgery test on both databases. In addition, the affine-based method seems to introduce some confusion in the deliberate forgery test, since in some cases its performance is worse than the baseline. On Systems C and D our method is superior in all cases except a few in which the affine proposal slightly improves the results. Thus, we can still claim that the intra-personal variability produced in the repetitions of signatures from the same writer has been successfully achieved by the cognitive distortion. To conclude this section, we do not claim that the cognitive perturbations cover all the unexpected real intra-personal variability. Although our proposal satisfactorily matches the signature variability, further distortions, such as the affine-based one among others, may be combined with it for further improvements.
6
CONCLUSION
This paper proposes an off-line signature duplicator based
on the cognitively inspired principles of the equivalence
model of human motor function. Although some initial
proposals have been published for on-line signatures, to the best of our knowledge this paper presents a novel method for the cognitive duplication of off-line signatures.
The most relevant literature suggests that this model can be divided into effector dependent and effector independent functions. Although they are considered separately, the duplicated signature image integrates the actions of both effectors.
The duplication algorithm is based on three main signature image modifications. First, the intra-component modification applies a sinusoidal distortion to the signature image; it covers both the effector independent variability of the user's cognitive spatial map and the effector dependent motor variability, interpreted as a modification of the individual pen strokes, i.e. of the individual stroke amplitudes in the case of the on-line signature. Secondly, the inter-component modification displaces every unconnected component; besides effector dependent and independent variability, it includes some variability of the pen-up end point trajectories. Finally, the inclination modification incorporates aspects such as a different pose, sheet inclination, pen inclination, etc.
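As an illustration only, a simplified sketch of these three modifications follows; the parameter ranges are our own assumptions, and the duplicator released at www.gpds.ulpgc.es remains the reference implementation:

    import numpy as np
    from scipy import ndimage

    def duplicate_signature(image, rng):
        """Generate one duplicate of a gray-level signature image
        (dark ink on a white background)."""
        h, w = image.shape
        # 1) Intra-component modification: sinusoidal distortion of the
        #    sampling grid, standing in for cognitive-map and motor variability.
        ax, ay = rng.uniform(1.0, 3.0, size=2)                # amplitudes (px)
        px_, py_ = rng.uniform(0.5 * w, w), rng.uniform(0.5 * h, h)  # periods
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        xs = xx + ax * np.sin(2.0 * np.pi * yy / py_)
        ys = yy + ay * np.sin(2.0 * np.pi * xx / px_)
        out = ndimage.map_coordinates(image, [ys, xs], order=1, cval=255)
        # 2) Inter-component modification: small independent shift of every
        #    unconnected component.
        labels, n = ndimage.label(out < 128)                  # label ink blobs
        shifted = np.full_like(out, 255)
        for k in range(1, n + 1):
            dy, dx = rng.integers(-2, 3, size=2)
            comp = np.where(labels == k, out, 255)
            comp = np.roll(comp, (int(dy), int(dx)), axis=(0, 1))
            shifted = np.minimum(shifted, comp)               # overlay dark ink
        # 3) Inclination modification: small global rotation of the result.
        angle = rng.uniform(-3.0, 3.0)                        # degrees
        return ndimage.rotate(shifted, angle, reshape=False, cval=255)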
Our hypothesis relies on the idea that signatures duplicated under a cognitive perspective model better the intra-personal variability of a signer. This hypothesis is evaluated by studying the improvement in the performance of off-line signature recognition systems when duplicated specimens are added to the training set. The results obtained in this work suggest that, independently of the number of real signatures we introduce, the performance of the system improves in all cases when the cognitive-based duplicator is used. As a result, signatures from our duplicator consistently improve the performance of four generative classifiers, which were used with their original
configuration without being adapted to any dataset. These automatic signature verifiers are completely different from one another in the type of features selected and in the classifiers employed. Additionally, to generalize our conclusions, the experiments were repeated on two well-known public off-line signature databases: GPDS-300 and MCYT-75.
The experiments confirm that using duplicated signatures alongside the genuine ones to train the systems improves their global performance. Moreover, we have examined the performance improvement per user: the results suggest that more than 75 % of users benefit from an improvement in performance when the duplication technique is employed. The results also suggest that our method retains the signer's intra-personal variability, improves the identification of the user by the classifier, and is equivalent, in average terms, to enrolling three more real signatures.
Consequently, it could be said that this method is simple, fast, efficient and useful for generative classifiers, at effectively zero human effort and only the cost of machine time. This novel duplicate generator has high potential in signature-based biometric applications, especially for resolving security problems or in cases where only a few signatures are enrolled. These and other applications can be exploited by others, since the duplicator is freely shared at www.gpds.ulpgc.es.
Although good results have been achieved here, more work remains to be done on the duplicator for off-line signatures in order to take full advantage of its productivity. The duplicator can be adapted to different users to improve their individual performance: one way of exploiting its productivity may be to change its configuration for each user, so that the larger the intra-personal variability, the wider the limits. In this field, it may be worth defining a more convenient strategy to duplicate signatures. As such, we will be keen to work on user-dependent parameter optimization to obtain better results. As a strategy, we plan to investigate an evolutionary algorithm whose objective function will be related to the Hellinger distance, and thus to verifying the visual result of the duplication. Finally, further research on the pseudo-dynamic extraction of the off-line signature could be an important advance in simulating the effector dependent function.
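For reference, the standard definition of the Hellinger distance between two normalized histograms p = (p_1, ..., p_n) and q = (q_1, ..., q_n), to which such an objective function would be related, is

    H(p, q) = \frac{1}{\sqrt{2}} \left[ \sum_{i=1}^{n} \big( \sqrt{p_i} - \sqrt{q_i} \big)^{2} \right]^{1/2}

It ranges from 0 for identical distributions to 1 for distributions with disjoint support, which makes it a convenient bounded objective.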
ACKNOWLEDGMENTS
We would like to thank Elias N. Zois for providing us with the System D code. M. D. is supported by a PhD fellowship from the ULPGC. This study was funded by the Spanish government's MCINN TEC2012-38630-C04-02 research project and the European Union FEDER program/funds.

REFERENCES

[1] R. Plamondon and G. Lorette, "Automatic signature verification and writer identification: the state of the art," Pattern Recogn., vol. 22, no. 2, pp. 107–131, 1989.
[2] F. Leclerc and R. Plamondon, "Automatic signature verification: The state of the art, 1989–1993," Int. Journal of Pattern Recogn. and Artificial Intelligence, vol. 8, no. 3, pp. 643–660, 1994.
[3] M. Fairhurst, "Signature verification revisited: Promoting practical exploitation of biometric technology," Electronics and Communication Engineering Journal, vol. 9, no. 6, pp. 273–280, 1997.
[4] R. Plamondon and S. N. Srihari, "On-line and off-line handwriting recognition: A comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63–84, 2000.
[5] J. Fierrez and J. Ortega-Garcia, On-line signature verification. Springer, 2008, pp. 189–209.
[6] D. Impedovo, G. Pirlo, and R. Plamondon, "Handwritten signature verification: New advancements and open issues," in Int. Conf. on Frontiers in Handwriting Recogn. (ICFHR), 2012, pp. 367–372.
[7] M. Diaz-Cabrera, A. Morales, and M. A. Ferrer, "Emerging issues for static handwritten signature biometric," in Advances in Digital Handwritten Signature Processing. A Human Artefact for e-Society, 2014, pp. 111–122.
[8] L. G. Hafemann, R. Sabourin, and L. S. Oliveira, "Offline handwritten signature verification - literature review," CoRR, vol. abs/1507.07909, 2015.
[9] M. Munich and P. Perona, "Visual identification by signature tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 2, pp. 200–217, 2003.
[10] E. Frias-Martinez, A. Sanchez, and J. Velez, "Support vector machines versus multi-layer perceptrons for efficient off-line signature recognition," Engineering Applications of Artificial Intelligence, vol. 19, no. 6, pp. 693–704, 2006.
[11] C. Rabasse, R. Guest, and M. Fairhurst, "A new method for the synthesis of signature data with natural variability," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 3, pp. 691–699, 2008.
[12] J. Galbally, J. Fierrez, M. Martinez-Diaz, and J. Ortega-Garcia, "Improving the enrollment in dynamic signature verification with synthetic samples," in 10th Int. Conf. on Document Analysis and Recogn. (ICDAR), 2009, pp. 1295–1299.
[13] M. Song and Z. Sun, "An immune clonal selection algorithm for synthetic signature generation," Mathematical Problems in Engineering, vol. 2014, pp. 1–12, 2014.
[14] M. Diaz, A. Fischer, R. Plamondon, and M. A. Ferrer, "Towards an automatic on-line signature verifier using only one reference per signer," in Proc. IAPR Int. Conf. on Document Analysis and Recogn. (ICDAR), 2015, pp. 631–635.
[15] M. A. Ferrer, M. Diaz-Cabrera, A. Morales, J. Galbally, and M. Gomez-Barrero, "Realistic synthetic off-line signature generation based on synthetic on-line data," in Proc. IEEE Int. Carnahan Conf. on Security Technology (ICCST), 2013, pp. 116–121.
[16] R. Guest, O. Hurtado, and O. Henniger, "Assessment of methods for image recreation from signature time-series data," IET Biometrics, vol. 3, no. 3, pp. 159–166, 2014.
[17] M. Diaz-Cabrera, M. Gomez-Barrero, A. Morales, M. A. Ferrer, and J. Galbally, "Generation of enhanced synthetic off-line signatures based on real on-line data," in Proc. IAPR Int. Conf. on Frontiers in Handwriting Recogn. (ICFHR), 2014, pp. 482–487.
[18] J. Galbally, M. Diaz-Cabrera, M. A. Ferrer, M. Gomez-Barrero, A. Morales, and J. Fierrez, "On-line signature recognition through the combination of real dynamic data and synthetically generated static data," Pattern Recognition, vol. 48, pp. 2921–2934, 2015.
[19] M. Diaz-Cabrera, M. Ferrer, and A. Morales, "Cognitive inspired model to generate duplicated static signature images," in Int. Conf. on Frontiers in Handwriting Recogn. (ICFHR), 2014, pp. 61–66.
[20] A. M. Wing, "Motor control: Mechanisms of motor equivalence in handwriting," Current Biology, vol. 10, no. 6, pp. R245–R248, 2000.
[21] K. S. Lashley, "Basic neural mechanisms in behavior," Psychological Review, vol. 37, no. 1, pp. 1–24, 1930.
[22] D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory. New York: Wiley and Sons, 1949.
[23] N. A. Bernstein, The Co-Ordination and Regulation of Movements. Oxford, U.K.: Pergamon, 1967.
[24] A. Marcelli, A. Parziale, and R. Senatore, "Some observations on handwriting from a motor learning perspective," in 2nd Workshop on Automated Forensic Handwriting Analysis (AFHA), 2013, pp. 6–10.
[25] M. Ferrer, M. Diaz-Cabrera, and A. Morales, "Static signature synthesis: A neuromotor inspired approach for biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 667–680, 2015.
[26] M. Kawato, "Internal models for motor control and trajectory planning," Current Opinion in Neurobiology, vol. 9, no. 6, pp. 718–727, 1999.
[27] T. Hafting, M. Fyhn, S. Molden, M.-B. Moser, and E. I. Moser, "Microstructure of a spatial map in the entorhinal cortex," Nature, vol. 436, no. 7052, pp. 801–806, 2005.
[28] C. O'Reilly and R. Plamondon, "Development of a sigma-lognormal representation for on-line signatures," Pattern Recogn., vol. 42, no. 12, pp. 3324–3337, 2009.
[29] A. O. Thomas, A. Rusu, and V. Govindaraju, "Synthetic handwritten CAPTCHAs," Pattern Recogn., vol. 42, no. 12, pp. 3365–3373, 2009.
[30] V. Nguyen and M. Blumenstein, "Techniques for static handwriting trajectory recovery: A survey," in Proceedings of the 9th IAPR Int. Workshop on Document Analysis Systems, 2010, pp. 463–470.
[31] M. Liwicki, Y. Akira, S. Uchida, M. Iwamura, S. Omachi, and K. Kise, "Reliable online stroke recovery from offline data with the data-embedding pen," in Int. Conf. on Document Analysis and Recogn. (ICDAR), 2011, pp. 1384–1388.
[32] M. Liwicki, S. Uchida, A. Yoshida, M. Iwamura, S. Omachi, and K. Kise, "More than ink: realization of a data-embedding pen," Pattern Recogn. Letters, vol. 35, pp. 246–255, 2014.
[33] S. Lee and J. Pan, "Handwritten numeral recognition based on hierarchically self-organizing learning networks with spatio-temporal pattern representation," in IEEE Computer Society Conf. on Computer Vision and Pattern Recogn. (CVPR), 1992, pp. 176–182.
[34] S. Lee and J. Pan, "Offline tracing and representation of signatures," IEEE Transactions on Systems, Man and Cybernetics, vol. 22, no. 4, pp. 755–771, 1992.
[35] R. Plamondon and C. Privitera, "The segmentation of cursive handwriting: an approach based on off-line recovery of the motor-temporal information," IEEE Transactions on Image Processing, vol. 8, no. 1, pp. 80–91, 1999.
[36] D. Doermann, N. Intrator, E. Rivin, and T. Steinherz, "Hidden loop recovery for handwriting recognition," in 8th Int. Workshop on Frontiers in Handwriting Recogn., 2002, pp. 375–380.
[37] H. Fu, S. Zhou, L. Liu, and N. Mitra, "Animated construction of line drawings," ACM Trans. Graph. (Proceedings of ACM SIGGRAPH ASIA), vol. 30, no. 6, 2011.
[38] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[39] M. Ferrer, J. Vargas, A. Morales, and A. Ordonez, "Robustness of offline signature verification based on gray level features," IEEE Transactions on Information Forensics and Security, vol. 7, no. 3, pp. 966–977, 2012.
[40] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, 1st ed. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1992.
[41] S. Kotz and S. Nadarajah, Extreme Value Distributions: Theory and Applications. London: Imperial College Press, 2000.
[42] M. Diaz-Cabrera, M. A. Ferrer, and A. Morales, "Modeling the lexical morphology of western handwritten signatures," PLoS ONE, vol. 10, no. 4, p. e0123254, 2015.
[43] M. Blumenstein, M. Ferrer, and J. Vargas, "The 4NSigComp2010 off-line signature verification competition: Scenario 2," in Int. Conf. on Frontiers in Handwriting Recogn. (ICFHR), 2010, pp. 721–726.
[44] J. Ortega-Garcia, J. Fierrez-Aguilar, D. Simon, J. Gonzalez, M. Faundez, V. Espinosa, A. Satue, I. Hernaez, J. J. Igarza, C. Vivaracho, D. Escudero, and Q. I. Moro, "MCYT baseline corpus: A bimodal biometric database," IEE Proceedings Vision, Image and Signal Processing, vol. 150, no. 6, pp. 395–401, 2003.
[45] M. Ferrer, J. Alonso, and C. Travieso, "Offline geometric parameters for automatic signature verification using fixed-point arithmetic," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 6, pp. 993–997, 2005.
[46] G. S. Eskander, R. Sabourin, and E. Granger, "Hybrid writer-independent–writer-dependent offline signature verification system," IET Biometrics, vol. 2, no. 4, pp. 169–181, 2013.
[47] E. N. Zois, L. Alewijnse, and G. Economou, "Offline signature verification and quality characterization using poset-oriented grid features," Pattern Recognition, vol. 54, pp. 162–177, 2016.
[48] J. Fierrez-Aguilar, J. Ortega-Garcia, and J. Gonzalez-Rodriguez, "Target dependent score normalization techniques and their application to signature verification," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 35, no. 3, pp. 418–425, 2005.
[49] A. K. Jain and A. A. Ross, "Learning user-specific parameters in a multibiometric system," in Int. Conf. on Image Processing (ICIP), 2002, pp. 57–60.
Moises Diaz received two M. Tech degrees in 2010, in Industrial Engineering and in Industrial Electronics and Automation Engineering, and holds an M.Sc. in Intelligent Systems and Numerical Applications in Engineering (2011) as well as an M.Ed. in Secondary Education (2013), all from La Universidad de Las Palmas de Gran Canaria. He is currently pursuing a Ph.D. degree, and his research areas include handwritten signature recognition, pattern recognition and vision applications to Intelligent Transportation Systems. He has been a visiting student researcher at the Vislab group at the University of Parma in Italy, the University of Bari in Italy, the École Polytechnique de Montréal in Canada, the University of Fribourg in Switzerland and the Indian Statistical Institute, Kolkata.
Miguel A. Ferrer received the M.Sc. degree in telecommunications in 1988 and the Ph.D. degree in 1994, both from the Universidad Politécnica de Madrid, Spain. He belongs to the Digital Signal Processing research group (GPDS) of the research institute for Technological Development and Communication Innovation (IDeTIC) at the University of Las Palmas de Gran Canaria, Spain, where he has been an Associate Professor since 1990. His research interests lie in the fields of computer vision, pattern recognition and biometrics, mainly those based on hand and handwriting; audio quality, mainly for health and machinery condition evaluation; and vision applications to fisheries and aquaculture.
George S. Eskander obtained a Ph.D. in Engineering from the École de Technologie Supérieure, Université du Québec, in 2014, where he is presently a post-doctoral researcher at the Département de génie de la production automatisée. Since May 2009 he has been working in the pattern recognition area at the Laboratoire d'imagerie, de vision et d'intelligence artificielle (LIVIA), and his main research interests are biometrics, handwritten signature verification and bio-cryptosystems.
Robert Sabourin joined the physics department of the Montreal University in 1977, where his main contribution was the design and implementation of a microprocessor-based fine tracking system combined with a low-light level CCD detector. In 1983, he joined the staff of the École de Technologie Supérieure, Université du Québec, in Montréal, where he co-founded the Dept. of Automated Manufacturing Engineering, in which he is currently Full Professor and teaches Pattern Recognition, Evolutionary Algorithms, Neural Networks and Fuzzy Systems. In 1992, he also joined the Computer Science Department of the Pontifícia Universidade Católica do Paraná (Curitiba, Brazil). Since 1996, he has been a senior member of the Centre for Pattern Recognition and Machine Intelligence (CENPARMI, Concordia University). Since 2012, he has held the Research Chair specializing in Adaptive Surveillance Systems in Dynamic Environments. Dr. Sabourin is the author (and co-author) of more than 300 scientific publications, including journals and conference proceedings. He was co-chair of the program committee of CIFED'98 in Québec, Canada, and of IWFHR'04 in Tokyo, Japan. He was nominated as Conference co-chair of ICDAR'07, held in Curitiba, Brazil, in 2007. His research interests are in the areas of adaptive biometric systems, adaptive surveillance systems in dynamic environments, intelligent watermarking systems, evolutionary computation and bio-cryptography.