Feature Weighted Support Vector Machines for Writer-independent On-line
Signature Verification
Jacques Swanepoel and Johannes Coetzer
Department of Mathematical Sciences
Stellenbosch University
Stellenbosch, Western Cape, South Africa
{jpswanepoel,jcoetzer}@sun.ac.za
Abstract—In this paper we present a novel framework for writer-independent on-line signature verification. This framework utilises a dynamic time warping-based dichotomy transformation and a writer-specific dissimilarity normalisation technique, in order to obtain a robust writer-independent signature representation in dissimilarity space. Support vector machines are utilised for signature modelling and verification. Linear and radial basis function kernels are investigated. In the case of the radial basis function kernel, both conventional and feature weighted variants are considered.
We show that the non-linear kernel significantly outperforms its linear counterpart. We also show that the incorporation of feature weights into the non-linear kernel function consistently improves verification proficiency.
When evaluated on the Philips signature database, which contains 1530 genuine signatures and 3000 amateur skilled forgeries from 51 writers, we show that equal error rates of 1.26% and 3.52% are expected when 15 and 5 genuine reference samples are considered per writer, respectively. This performance estimate compares favourably with those of existing systems also evaluated on this data set. Furthermore, there is sufficient evidence to suggest that further investigation into the feature set considered, as well as the feature weighting strategy utilised, may further improve performance.
Keywords-signature verification; writer-independent authentication; support vector machines; feature weighting;
I. INTRODUCTION
The handwritten signature, as a means of identity verification,
is the most widely used behavioural biometric authentication
technique. In this paper we focus on on-line signature
verification. This problem is fundamentally different from
that of off-line signature verification (where signature data
is captured by digitising a pen-on-paper sample) in the
sense that an on-line signature is captured by means of an
electronic writing device and associated electronic writing
surface. The signature acquisition process therefore provides
spatial as well as temporal (or dynamic) information.
In recent years, the concept of writer-independent signa-
ture modelling has gained notable interest in the field of
off-line signature verification. This approach is fundamen-
tally different from traditional writer-dependent signature
modelling techniques and performs model construction in
dissimilarity space, as opposed to the more commonly
utilised feature space. The dissimilarity representation of
any questioned signature is obtained by comparing it to
a known genuine reference sample that belongs to the
claimed owner. The authentication of a questioned signature
associated with any writer is thereby reduced to a two-class
problem. Writer-independent signature modelling has been
shown to outperform the writer-dependent approach in two
key areas: (1) Since a single, universal signature model is
constructed using pooled data obtained from several different
writers, this approach successfully addresses the issue of
data scarcity, i.e. an efficient model can be constructed even
when limited training samples are available; (2) Since model
training only occurs after the signature data is collected in a
controlled environment, this approach is able to facilitate
training with skilled forgeries – a property that is not
practically feasible within a writer-dependent framework.
To the best of our knowledge, no existing publications
investigate the use of a writer-independent strategy within
the context of on-line signature verification. In this paper,
we initiate such an investigation and present our findings as
a proof of concept.
A wide variety of writer-dependent on-line systems are
proposed in the literature. Comprehensive surveys of these
systems can be found in [1] and [2], whilst a historical
perspective is given in [3]. It is clear from these surveys that
many pattern recognition techniques have been successfully
employed for the purpose of on-line signature verification.
Popular feature extraction techniques include the utilisation
of function-features (such as pen position, velocity, acceler-
ation, pressure, etc.) and global parameter-features (such as
the average, minimum and maximum of the aforementioned
function-features, as well as total time duration, number
of pen up or pen down samples, etc.). Popular classifica-
tion techniques include simple distance classifiers, dynamic
programming, neural networks and hidden Markov models
(HMMs).
The HMM-based systems proposed in [4]–[7] are of
particular interest to this study, since these systems also
consider the Philips signature database for evaluation pur-
poses. Their reported results are therefore fit for comparison
to those reported for the systems presented in this paper.
2014 14th International Conference on Frontiers in Handwriting Recognition
2167-6445/14 $31.00 © 2014 IEEE
DOI 10.1109/ICFHR.2014.79
Figure 1. Schematic representation of a typical system presented in this paper. The sets S_T and S_E denote the signatures considered for training and evaluation respectively, whilst R_T and R_E denote their associated genuine reference signatures.
II. SYSTEM OVERVIEW
The design of a typical system presented in this paper is
conceptually illustrated in Figure 1.
Following successful signature acquisition, both spatial
and temporal features are extracted from the captured signa-
ture data. The resulting feature set is subsequently converted
into a dissimilarity vector, by matching its associated feature
vectors to those extracted from a known genuine reference
sample. A dynamic time warping (DTW) algorithm [8]
is used for matching. The resulting dissimilarity vector
is then normalised using the writer-specific dissimilarity
normalisation technique proposed in [9].
The entire set of normalised dissimilarity vectors, ex-
tracted from both genuine (positive) and forged (negative)
samples belonging to a set of guinea-pig writers, is used
to train a support vector machine (SVM) classifier [10].
We consider linear, radial basis function (RBF) and feature
weighted RBF kernels [11], [12]. The trained SVM is finally
used to accept/reject any subsequently presented questioned
signatures.
During experimentation, system proficiency is gauged
using the equal error rate (EER) performance metric. Sig-
nature samples belonging to different writers are used for
training and evaluation purposes.
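As context for how the EER metric gauges proficiency, a minimal sketch of a threshold-sweep EER estimator follows. The function name and the synthetic score distributions are illustrative assumptions, not part of the paper's implementation; the only premise taken from the text is that genuine samples should receive higher confidence scores than forgeries.

```python
# Sketch: estimating the equal error rate (EER) from confidence scores.
# Hypothetical helper; the paper's systems produce a confidence s* in
# [0, 1], with genuine samples expected to score higher than forgeries.
import numpy as np

def equal_error_rate(genuine_scores, forgery_scores):
    """Sweep the decision threshold and return the operating point where
    the false rejection rate (FRR) of genuine samples is closest to the
    false acceptance rate (FAR) of forgeries."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    best_gap, eer = np.inf, 1.0
    for tau in thresholds:
        frr = np.mean(genuine_scores < tau)   # genuine samples rejected
        far = np.mean(forgery_scores >= tau)  # forgeries accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer

rng = np.random.default_rng(0)
genuine = np.clip(rng.normal(0.8, 0.10, 200), 0, 1)  # synthetic scores
forged = np.clip(rng.normal(0.3, 0.15, 200), 0, 1)
print(round(equal_error_rate(genuine, forged), 3))
```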
III. SIGNATURE REPRESENTATION
A. Feature extraction
Since on-line signatures are captured using specialised
hardware, several key measurements are recorded during the
signature acquisition process. For the systems presented in
this paper, these measurements are represented by the 5-tuple
(x, y, p, θx, θy), sampled at a rate of up to 200 Hz [4]. After
successful recording of the signing event, the signature is
therefore represented by the T -dimensional feature vectors
x and y (horizontal and vertical pen-positions respectively),
p (axial pen-pressure), as well as θx and θy (pen-tilt in the x
and y directions respectively), where T denotes the number
of sampled points.
Although these feature vectors may suffice for basic
model construction, various additional descriptors can be
derived from the captured data. For example, from the spatial
feature x it is possible to derive the dynamic features vx
and ax (velocity and acceleration, respectively, in the x
direction). Similarly, vy and ay can be obtained from y.
These dynamic features are considered valuable throughout
the literature, since they proved to be considerably more
difficult to mimic than spatial signature characteristics, even
when the forger possesses forensic expertise [4], [13].
Several additional features may also be derived from the
captured data, as detailed in e.g. [4], [7], [14]. An in-depth
investigation of supplementary features, however, is deemed
outside the scope of this paper.
For initial signature representation, the systems presented
in this paper therefore consider the T × 9 feature set
X = [p, x, y, vx, vy, ax, ay, θx, θy],    (1)
where T represents the number of samples associated with
the signature in question.
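The assembly of the feature set in (1) can be sketched as follows. The finite-difference derivative scheme and the uniform sampling assumption are illustrative choices made here; the paper does not specify how the velocities and accelerations are computed from the captured positions.

```python
# Sketch: assembling the T x 9 feature set of Eq. (1) from the raw
# 5-tuple (x, y, p, theta_x, theta_y). Velocities and accelerations are
# approximated with numerical gradients, assuming a uniform sampling
# rate; this differentiation scheme is an assumption, not the paper's.
import numpy as np

def feature_set(x, y, p, tx, ty, rate_hz=200.0):
    dt = 1.0 / rate_hz
    vx = np.gradient(x, dt)   # horizontal pen velocity
    vy = np.gradient(y, dt)   # vertical pen velocity
    ax = np.gradient(vx, dt)  # horizontal pen acceleration
    ay = np.gradient(vy, dt)  # vertical pen acceleration
    # Column order follows Eq. (1): X = [p, x, y, vx, vy, ax, ay, tx, ty]
    return np.column_stack([p, x, y, vx, vy, ax, ay, tx, ty])

T = 50
t = np.linspace(0, T / 200.0, T)  # synthetic pen trajectory
X = feature_set(np.cos(t), np.sin(t), np.ones(T), np.zeros(T), np.zeros(T))
print(X.shape)  # (50, 9)
```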
B. Dichotomy transformation
In a writer-dependent modelling framework, the extracted
feature sets (detailed in the previous section) may be used
to construct a separate signature model for each writer
enrolled into the system. In order to obtain a suitable writer-
independent representation, however, a dichotomy transfor-
mation (i.e. the process that converts the signature represen-
tation from feature space to dissimilarity space) is required.
The systems presented in this paper quantify the dis-
similarity between two feature vectors through a DTW
algorithm. DTW is particularly well suited for this task,
since: (1) it is able to calculate the distance between feature
vectors with different dimensions, as is usually the case for
on-line signature data; (2) prior to the distance calculation,
the algorithm non-linearly aligns individual features based
on their similarity, thereby compensating for minor discrep-
ancies in signature data belonging to the same writer (i.e.
intra-class variability).
Given a T^(k) × D feature set X^(k), extracted from a
reference signature belonging to a specific writer, any other
T^(q) × D feature set X^(q), extracted from a signature that is
claimed to belong to the same writer, can be converted into
a D-dimensional dissimilarity vector z^(k,q) by calculating
the dissimilarity between each pair of corresponding feature
vectors as follows,

    z^{(k,q)} = \bigcup_{d=1}^{D} D(x_d^{(k)}, x_d^{(q)}),    (2)

where ⋃ denotes the vector concatenation operator, whilst
D(x_d^{(k)}, x_d^{(q)}) denotes the DTW-based distance between
x_d^{(k)} ∈ X^(k) and x_d^{(q)} ∈ X^(q).
This DTW-based approach was shown in [15] to sig-
nificantly outperform the more commonly used Euclidean
distance in the construction of robust dissimilarity vectors.
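The dichotomy transformation of Eq. (2) can be sketched as below. A textbook dynamic-programming DTW with unit step costs is used here as an illustrative stand-in; the exact DTW variant and local cost used in the paper may differ.

```python
# Sketch: the dichotomy transformation of Eq. (2). One DTW distance is
# computed per feature column and the D distances are concatenated into
# the dissimilarity vector z. Plain O(T1*T2) DTW; illustrative only.
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences of
    possibly different lengths."""
    Ta, Tb = len(a), len(b)
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[Ta, Tb]

def dissimilarity_vector(X_ref, X_qry):
    """z^(k,q): one DTW-based distance per corresponding feature column."""
    assert X_ref.shape[1] == X_qry.shape[1]  # same D; lengths may differ
    return np.array([dtw_distance(X_ref[:, d], X_qry[:, d])
                     for d in range(X_ref.shape[1])])

ref = np.random.default_rng(1).normal(size=(40, 9))
qry = ref[::2] + 0.01  # shorter, slightly perturbed "genuine" sample
z = dissimilarity_vector(ref, qry)
print(z.shape)  # (9,)
```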
IV. SIGNATURE MODELLING
In order to construct a writer-independent model, samples
of genuine signatures and skilled forgeries are collected
from a set of so-called guinea pig writers. These writers
are considered representative of the general public, and their
signatures are used for training purposes only.
Given a set of K reference signatures and N labelled
training signatures (that include an equal number of pos-
itive and negative samples) for each of the Ω guinea-
pig writers, the relevant dissimilarity vectors are generated
for each writer by computing z^(k,n) for k = {1, 2, . . . , K}
and n = {1, 2, . . . , N}. We henceforth refer to dissimilarity
vectors that represent genuine signatures and forgeries as
being positive and negative respectively. Furthermore, let
Z+ and Z− denote the sets that contain the positive and
negative dissimilarity vectors obtained from all the guinea-
pig writers.
A. Dissimilarity normalisation
The sets Z+ and Z− provide a suitable platform for
obtaining a decision boundary in dissimilarity space. How-
ever, it is advisable to incorporate a preprocessing stage,
in this case dissimilarity normalisation, into the signature
modelling framework. Dissimilarity normalisation aims to
address the following potential issues: (1) Since the feature
vectors obtained during signature acquisition are highly de-
pendent on the handwriting style (e.g. velocity, acceleration
and pressure) of the writer in question, the dissimilarity
sets obtained by considering several different writers may
contain dissimilarity vectors of arbitrary magnitude – and
may therefore be irregularly distributed in dissimilarity
space; (2) Since the negative samples considered for model
construction represent skilled forgeries, a certain degree of
class overlap in dissimilarity space is a distinct possibility.
In order to address these issues, the systems presented
in this paper perform dissimilarity normalisation using the
writer-specific normalisation strategy proposed in [9].
For every writer ω, the D-dimensional statistics
μ^(ω) and σ^(ω) are determined by considering the
N^(ω) = (K² − K)/2 unique dissimilarity vectors obtained
when every reference signature belonging to writer ω is
compared to every other reference signature belonging to
the same writer, as follows,

    μ_d^{(ω)} = \frac{1}{N^{(ω)}} \sum_{i=1}^{K} \sum_{j>i} z_d^{(i,j)},    (3)

    σ_d^{(ω)} = \sqrt{ \frac{1}{N^{(ω)} − 1} \sum_{i=1}^{K} \sum_{j>i} \left( z_d^{(i,j)} − μ_d^{(ω)} \right)^2 }.    (4)
The normalised dissimilarity vector z̄ is subsequently ob-
tained using a modified logistic function as follows,

    z̄ = \bigcup_{d=1}^{D} \left[ 1 + \exp\left( \frac{R_d^{(ω)} − 6 z_d}{R_d^{(ω)}} \right) \right]^{−1},    (5)

    R_d^{(ω)} = μ_d^{(ω)} + σ_d^{(ω)}.    (6)
This writer-specific normalisation strategy was shown in
[15] to improve overall class separability in dissimilarity
space, thereby significantly increasing verification profi-
ciency.
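The normalisation procedure of Eqs. (3)–(6) can be sketched as follows. The function names are illustrative, and the form of the modified logistic is taken from the reconstruction of Eq. (5) above; it should be checked against [9] and [15] before reuse.

```python
# Sketch: writer-specific dissimilarity normalisation, Eqs. (3)-(6).
# Reference statistics are estimated from the (K^2 - K)/2 unique
# reference-reference dissimilarity pairs of one writer, then each
# component is squashed with the modified logistic of Eq. (5).
import numpy as np

def reference_statistics(Z_ref):
    """Z_ref: shape (N_w, D), the writer's reference-pair dissimilarity
    vectors. Returns mu (Eq. 3), sigma (Eq. 4) and R = mu + sigma (Eq. 6)."""
    mu = Z_ref.mean(axis=0)
    sigma = Z_ref.std(axis=0, ddof=1)  # ddof=1 matches the 1/(N-1) factor
    return mu, sigma, mu + sigma

def normalise(z, R):
    """Eq. (5): component-wise modified logistic, mapping each
    dissimilarity into (0, 1)."""
    return 1.0 / (1.0 + np.exp((R - 6.0 * z) / R))

Z_ref = np.abs(np.random.default_rng(2).normal(1.0, 0.2, size=(10, 9)))
mu, sigma, R = reference_statistics(Z_ref)
z_bar = normalise(np.full(9, 2.0), R)
print(np.all((z_bar > 0) & (z_bar < 1)))  # True
```

Larger raw dissimilarities map monotonically toward 1, so forgeries are pushed toward one end of each normalised axis regardless of the writer's natural dissimilarity scale.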
B. Weighted kernel support vector machines
The primary objective of an SVM-based classifier is to de-
termine the hyperplane in dissimilarity space that maximally
separates the positive and negative classes. This hyperplane
is described by the weight vector w and bias b. When
the two classes are not linearly separable, a mapping φ(z)
is often employed in order to transform the input data to
kernel space, where a more effective hyperplane may be
determined. Popular choices for the kernel function
    K(z^{(i)}, z^{(j)}) = φ(z^{(i)})′ φ(z^{(j)}),    (7)
where ′ denotes the vector transpose operator, include linear,
polynomial and radial basis function (RBF) kernels. The
SVM-based decision boundary is therefore described by
    f(z̄) = w′ φ(z̄) + b.    (8)
A notable advantage provided by the SVM-based ap-
proach is the fact that one can easily introduce the concept of
feature weighting by simply modifying the kernel function
[11], [12]. For instance, a weighted RBF kernel may be
obtained as follows,
    K(z^{(i)}, z^{(j)}) = \exp\left( −γ \sum_{d=1}^{D} α_d \left( z_d^{(i)} − z_d^{(j)} \right)^2 \right),    (9)
where αd denotes the weight associated with the dth feature,
whilst γ > 0 denotes the kernel width. The use of trivial
weights, where αd = 1 for d = 1, 2, . . . , D, corresponds to
the conventional RBF kernel.
In this paper we consider two fundamentally different
feature weighting strategies, namely the F-score and the
linear SVM weighting methods, as discussed in [11]. In
the former strategy, the weight associated with each feature
equals its inter-to-intra-class-variability-ratio. In the latter
strategy, the positive and negative classes are first used to
train a conventional linear SVM. The systems presented
in this paper employ the sequential minimal optimisation
(SMO) algorithm [16] for SVM training. The feature weights
used for the subsequent weighted RBF kernel are given by
α = wLIN, where wLIN denotes the weight vector obtained
for the linear SVM.
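The weighted RBF kernel of Eq. (9) can be sketched as follows. The helper names are illustrative; in practice the weight vector α would come from the F-score statistic or from the weight vector of a trained linear SVM, as described above, rather than being supplied by hand.

```python
# Sketch: the feature weighted RBF kernel of Eq. (9). With trivial
# weights (all alpha_d = 1) it reduces to the conventional RBF kernel.
import numpy as np

def weighted_rbf(zi, zj, alpha, gamma=1.0):
    """K(z_i, z_j) = exp(-gamma * sum_d alpha_d * (z_id - z_jd)^2)."""
    return np.exp(-gamma * np.sum(alpha * (zi - zj) ** 2))

def gram_matrix(Z, alpha, gamma=1.0):
    """Pairwise kernel matrix over a set of dissimilarity vectors,
    e.g. for use with an SVM that accepts a precomputed kernel."""
    n = len(Z)
    return np.array([[weighted_rbf(Z[i], Z[j], alpha, gamma)
                      for j in range(n)] for i in range(n)])

Z = np.random.default_rng(3).normal(size=(5, 9))
alpha = np.ones(9)                   # trivial weights: conventional RBF
K = gram_matrix(Z, alpha, gamma=0.5)
print(np.allclose(np.diag(K), 1.0))  # True: K(z, z) = 1
```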
V. VERIFICATION
Following the acquisition of a questioned signature,
claimed to belong to writer ω, the relevant feature set is
constructed as discussed in Section III-A. This feature set is
then compared to that of each of the K reference signatures
belonging to writer ω, in order to produce a set of normalised
DTW-based dissimilarity vectors.
Each normalised dissimilarity vector is subsequently pre-
sented to the trained SVM, yielding a signed distance mea-
sure relative to the corresponding decision boundary. The
logistic function is used to convert each distance measure
into a partial confidence score s ∈ [0, 1]. The set of K
partial confidence scores is then averaged, yielding the final
confidence score s∗ as follows,
    s^* = \frac{1}{K} \sum_{k=1}^{K} \left[ 1 + \exp\left( −f(z̄_{(q,k)}^{(ω)}) \right) \right]^{−1}.    (10)
Finally, a threshold τ ∈ [0, 1] is imposed on s∗, such that
the questioned signature is accepted as genuine if and only
if s∗ ≥ τ .
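The score fusion of Eq. (10) and the final thresholding step can be sketched as follows. The stand-in SVM distances are hypothetical inputs; in the actual system each would be the signed distance f(z̄) of one normalised dissimilarity vector from the trained decision boundary.

```python
# Sketch: the score fusion of Eq. (10). Each of the K normalised
# dissimilarity vectors yields a signed SVM distance f(z_bar); the
# logistic function maps it to a partial confidence in [0, 1], and the
# K partial scores are averaged before thresholding.
import numpy as np

def final_score(svm_distances):
    """s* = (1/K) * sum_k [1 + exp(-f_k)]^(-1)."""
    partial = 1.0 / (1.0 + np.exp(-np.asarray(svm_distances, dtype=float)))
    return partial.mean()

def verify(svm_distances, tau=0.5):
    """Accept the questioned signature iff s* >= tau."""
    return final_score(svm_distances) >= tau

# Hypothetical distances for K = 5 reference comparisons of one
# questioned signature: mostly on the genuine side of the boundary.
print(verify([1.2, 0.8, 1.5, -0.1, 0.9], tau=0.5))  # True
```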
VI. EXPERIMENTS
A. Data
System evaluation is performed using the well-known
Philips signature database [4]. This data set contains
1530 genuine signatures and 3000 amateur skilled forgeries
obtained from 51 writers. These forgeries may be sub-
categorised as either home-improved (1530 samples) or over-
the-shoulder (1470 samples). The home-improved forgeries
were produced by forgers who had in their possession an
off-line sample (i.e. paper copy) of the genuine signature, as
well as ample time to practice its reproduction. The over-the-
shoulder forgeries were produced by forgers who witnessed
a legitimate signing event and then attempted to reproduce
the signature immediately afterwards.
B. Protocol
In order to ensure that data from different writers are
used for model training and evaluation, the data set is
partitioned into two disjoint subsets prior to experimentation.
This strategy avoids potential model overfitting and therefore
produces an unbiased system performance estimate. These
partitions, referred to as the training set and evaluation
set, contain the signatures of 34 writers and 17 writers
respectively.
During model training, only the training set is considered.
For every writer, 15 genuine signatures are reserved as a ref-
erence set, whilst 15 genuine signatures and 15 forgeries are
included into the training data. These signatures are used to
obtain a total of 225 positive and 225 negative dissimilarity
vectors. The entire set of positive and negative dissimilarity
vectors, obtained from all 34 writers in the training set, is
used to determine the optimal decision boundary, which is
retained for subsequent verification. Note that, although it
is not reasonable to assume that 15 reference samples will
be available for every writer enrolled into the system during
deployment, it is entirely reasonable to consider 15 reference
samples during training, since these samples are collected
within a controlled environment and are used for training
purposes only.
During system evaluation, only the evaluation set is
considered. For every writer, K genuine signatures are
reserved as a reference set, whilst 30−K genuine signatures
and 60 forgeries are considered for verification. The entire
set of genuine signatures and forgeries, obtained from all
17 writers in the evaluation set, is used to gauge system
performance.
Since only 17 of the 51 writers are considered for
evaluation, the protocol utilised in this paper employs 3-
fold cross validation and 10-fold data randomisation, which
proceeds as follows: (1) The data set is partitioned into
three equal subsets, each containing the signatures from 17
writers; (2) Each subset, in turn, is used as an evaluation set,
whilst signatures from the remaining 34 writers constitute
the training set; (3) The order of the writers is randomised
and the process is repeated. The reported results therefore
represent the average performance for 30 independent trials.
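The partitioning scheme above can be sketched as follows. Names and the seed are illustrative; the essential point, taken from the protocol, is that writers (not individual signatures) are shuffled and split, so training and evaluation writers are always disjoint and 30 trials result in total.

```python
# Sketch: 3-fold cross validation with 10-fold data randomisation over
# the 51 writers of the Philips database, yielding 30 independent
# trials with disjoint training (34 writers) and evaluation (17
# writers) sets.
import numpy as np

def writer_folds(n_writers=51, n_folds=3, n_randomisations=10, seed=0):
    rng = np.random.default_rng(seed)
    writers = np.arange(n_writers)
    trials = []
    for _ in range(n_randomisations):
        rng.shuffle(writers)
        # copy before splitting so later shuffles do not mutate stored folds
        folds = np.array_split(writers.copy(), n_folds)
        for f in range(n_folds):
            eval_w = folds[f]                                    # 17 writers
            train_w = np.concatenate(folds[:f] + folds[f + 1:])  # 34 writers
            trials.append((train_w, eval_w))
    return trials

trials = writer_folds()
print(len(trials))  # 30
train_w, eval_w = trials[0]
print(len(train_w), len(eval_w))  # 34 17
```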
C. Results
We evaluate our SVM-based verification protocol for
several kernel function configurations. These include the
linear (LIN), radial basis function (RBF), F-score weighted
RBF (FS-WRBF) and linear support vector weighted RBF
(LSV-WRBF) variants. The performance metric μ_EER^(K), which
denotes the average equal error rate (EER) achieved for a
specific value of K, is presented in Table I.
It is clear from Table I that the linear kernel is signif-
icantly outperformed by its RBF-based counterparts. It is
also clear that there exists a strong correlation between the
reference set size K and system performance. This is an
expected result, since the number of reference signatures
available per writer plays a key role during several phases
of model construction. Finally, it becomes apparent that the
weighted kernels prove superior to the conventional RBF
kernel. Although this superiority may not be significant, it
is undoubtedly consistent.
Table I
AVERAGE EERS (%) OBTAINED WHEN THE PHILIPS EVALUATION SET IS CONSIDERED.

μ_EER^(K)       K = 3     5     7     9    11    13    15    AVE
LIN              8.16  7.08  6.13  5.82  5.17  4.78  4.52   5.95
RBF              4.55  3.67  2.62  2.38  1.89  1.47  1.29   2.55
FS-WRBF          4.42  3.54  2.60  2.26  1.77  1.37  1.26   2.46
LSV-WRBF         4.34  3.52  2.59  2.23  1.74  1.36  1.26   2.44
AVE              5.37  4.46  3.49  3.17  2.64  2.24  2.08   3.35
Figure 2. (a)–(b) Average feature weights considered by the FS-WRBF and LSV-WRBF kernels respectively. The feature indices correspond to the feature set column numbers defined in (1), namely X = [p, x, y, vx, vy, ax, ay, θx, θy].
We find that, although two fundamentally different
weighting strategies are considered, there is no clear dis-
tinction between the performances yielded by the differ-
ently weighted RBF kernels. This is an interesting result,
especially when one analyses the different feature weights
considered by the two strategies, as illustrated in Figure 2.
Although both strategies recognise e.g. y, vx and vy as
highly discriminant features, there is a notable difference
in the weights assigned to e.g. x and ay . Another point
of interest is raised when one considers the fact that it is
reported in [4] that both of the pen-tilt features are amongst
the most discriminant features in their system. This assertion
is in stark contrast with the evidence presented in Figure 2,
where θx and θy rank amongst the least discriminant features
when either strategy is considered.
We compare the results reported in this paper to those
reported in the literature for systems also evaluated on the
Philips database. These include the systems presented in [4]–
[7], as discussed in Section I. The reader is reminded that,
although these systems were evaluated on the same data set,
there are slight differences in the experimental protocols
considered for evaluation. Nevertheless, the experimental
conditions are similar enough to perform a sensible com-
parison. This comparison is presented in Table II.
From Table II we observe that, for K = 15, both WRBF-
based systems outperform the system presented in [4], whilst
they are marginally outperformed by the systems presented
Table II
EERS (%) REPORTED FOR SELECTED EXISTING SYSTEMS EVALUATED ON THE PHILIPS DATABASE.

System        K = 5    K = 15
[4] (1998)      -       1.90
[5] (2000)      -       1.02
[6] (2004)     3.54     0.95
[7] (2007)     3.25      -
FS-WRBF        3.54     1.26
LSV-WRBF       3.52     1.26
in [5] and [6]. For K = 5, there is no clear distinction
between the performance of our systems and that of [6],
whilst they are marginally outperformed by the system
presented in [7].
This is a promising result, considering the comparatively
rudimentary feature set considered by the systems presented
in this paper. It is reasonable to assume that the development
of a more sophisticated feature extraction process should
almost certainly improve system performance. Furthermore,
if pen-tilt features do indeed contain valuable information, as
asserted in [4], it is reasonable to believe that the utilisation
of an alternative feature weighting strategy, specifically one
that successfully quantifies the significance of the pen-tilt
features, could potentially improve system performance.
VII. CONCLUSION
In this paper we demonstrated that: (1) A DTW-based
algorithm is able to successfully convert a writer-dependent
on-line signature feature set into a writer-independent dis-
similarity vector, provided that a genuine reference sam-
ple is available for comparison; (2) The incorporation of
feature weights into the SVM kernel consistently results
in superior verification performance; (3) The utilisation of
both aforementioned techniques produces a novel writer-
independent on-line signature verification system of which
the performance compares favourably with that of existing
systems evaluated on the same database. As a proof of
concept, the systems presented in this paper are therefore
deemed successful.
Furthermore, additional topics that warrant further inves-
tigation are identified. Firstly, the investigation of alternative
feature weighting strategies is deemed warranted. Candidate
strategies include linear discriminant analysis (LDA), as
employed in [4], the receiver operating characteristic (ROC)
based approach proposed in [12], as well as the FSDD
feature ranking algorithm proposed in [17]. Also, a large
number of additional features may be derived from the orig-
inally captured signature data. Such an expanded feature set
should be well suited for use in a feature weighted modelling
framework, since the roles of relatively less discriminating
features are minimised during model construction.
The incorporation of alternative feature weighting strate-
gies, into an expanded feature set, is currently under inves-
tigation.
REFERENCES
[1] D. Impedovo and G. Pirlo, “Automatic signature verification: the state of the art,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 38, no. 5, pp. 609–635, 2008.
[2] I. El-Henawy, M. Rashad, O. Nomir, and K. Ahmed, “Online signature verification: State of the art,” International Journal of Computers & Technology, vol. 4, no. 2, pp. 664–678, 2013.
[3] R. Plamondon and G. Lorette, “Automatic signature verification and writer identification – the state of the art,” Pattern Recognition, vol. 22, no. 2, pp. 107–131, 1989.
[4] J. Dolfing, E. Aarts, and J. van Oosterhout, “On-line signature verification with hidden Markov models,” International Conference on Pattern Recognition, vol. 2, pp. 1309–1312, 1998.
[5] P. Le Riche, “Handwritten signature verification: a hidden Markov model approach,” Master’s thesis, Stellenbosch University, 2000.
[6] B. Van, S. Garcia-Salicetti, and B. Dorizzi, “Fusion of HMMs likelihood and Viterbi path for on-line signature verification,” in Biometric Authentication. Springer, 2004, pp. 318–331.
[7] ——, “On using the Viterbi path along with HMM likelihood information for online signature verification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 5, pp. 1237–1247, 2007.
[8] L. Rabiner and C. Schmidt, “Application of dynamic time warping to connected digit recognition,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 28, no. 4, pp. 377–388, 1980.
[9] J. Swanepoel and J. Coetzer, “Writer-specific dissimilarity normalisation for improved writer-independent off-line signature verification,” International Conference on Frontiers in Handwriting Recognition, pp. 391–396, 2012.
[10] V. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[11] Y. Chang and C. Lin, “Feature ranking using linear SVM,” Causation and Prediction Challenge: Challenges in Machine Learning, vol. 2, pp. 47–57, 2008.
[12] S. Zhang, M. Maruf Hossain, M. Rafiul Hassan, J. Bailey, and K. Ramamohanarao, “Feature weighted SVMs using receiver operating characteristics,” SIAM International Conference on Data Mining, pp. 497–508, 2009.
[13] N. Houmani, S. Garcia-Salicetti, and B. Dorizzi, “On measuring forgery quality in online signatures,” Pattern Recognition, vol. 45, no. 3, pp. 1004–1018, 2012.
[14] H. Ketabdar, J. Richiardi, and A. Drygajlo, “Global feature selection for on-line signature verification,” IGS Conference, 2005.
[15] J. Swanepoel and J. Coetzer, “A robust dissimilarity representation for writer-independent signature modelling,” IET Biometrics, vol. 2, no. 4, pp. 159–168, 2013.
[16] J. Platt, “Sequential minimal optimization: A fast algorithm for training support vector machines,” Microsoft Research, 1998.
[17] J. Liang, S. Yang, and A. Winstanley, “Invariant optimal feature selection: A distance discriminant and feature ranking based solution,” Pattern Recognition, vol. 41, no. 5, pp. 1429–1439, 2008.