

[IEEE 2008 First Workshops on Image Processing Theory, Tools and Applications (IPTA) - Sousse, Tunisia (2008.11.23-2008.11.26)]

Abstract—This paper describes the design and development of a prototype of a robust biometric system for personnel verification. The system uses features of the human hand extracted using the Scale Invariant Feature Transform (SIFT) operator. The hand image from which features are extracted is acquired using a low-cost scanner. The extracted palmprint region is robust to translation and rotation of the hand on the scanner. The use of the SIFT operator for feature extraction makes the system robust to the scale, or spatial resolution, of the acquired hand images. The system is tested on the IITK database of 200 images and the PolyU database of 7751 images. The design of the system, with a low-cost scanner as input device, robustness to translation, rotation and spatial resolution, and test performance of FAR 0.02%, FRR 0.62%, and accuracy 99.67%, suggests that the system can be used for civilian applications and high-security environments.

I. INTRODUCTION

BIOMETRICS has received increased research interest and is considered one of the most important and reliable methods in the field of information security [21]. Increasing requirements for security have led to the rapid development of intelligent personal identification systems based on biometrics, which have found wide use in commercial and law-enforcement applications. Biometrics establishes the identity of a person by physiological and/or behavioral characteristics. Fingerprint is the most widely used biometric feature, and iris is the most reliable [22]. Using the human hand as a biometric feature is a relatively new approach. The human hand contains many unique features in the palm region, the inner surface of the hand between the wrist and the fingers, that can be used for personal verification. A palmprint region has useful features such as principal lines, wrinkles, ridges, minutiae points, singular points, and texture pattern for its representation [20]. Some of the advantages of palmprint-based biometric systems are as follows. It is a relatively simple method that can use low-resolution images and provides high efficiency. The user cooperation required for data acquisition is not very stringent. The features of the human hand are relatively stable, and the hand image from which they are extracted can be acquired relatively easily. Furthermore, it

Manuscript received Aug 28, 2008. This work is supported in part by the Ministry of Communication and Information Technology, India.

Badrinath G. S. is with the Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India (e-mail: [email protected], [email protected]).

Phalguni Gupta is with the Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India (e-mail: [email protected], [email protected]).

has been reported [23] that identification systems based on hand features are the most acceptable to users. This paper proposes a novel method for palmprint matching that uses the Scale Invariant Feature Transform (SIFT) for keypoint extraction, with the number of matching points computed using a Euclidean distance metric. System parameters are optimized to give the lowest equal error rate (EER) on two data sets. The system is designed to be robust to translation, rotation and spatial resolution, and highly accurate, at a reasonable price, so that it is suitable for civilian and high-end security applications. Nevertheless, limited work has been reported on palmprint identification and verification, despite the importance of palmprint features. Some recent research efforts [1-3] address the problem of palmprint recognition in large databases and achieve very low error rates. Zhang and Shu [6] proposed a palmprint verification system based on datum points, invariance and line feature matching. Duta et al. [8] presented an authentication method based on the deformable matching of hand shapes. Palmprints were created using inked palms. The verification decision is based on shape distance, which is automatically computed during the alignment stage. Jain and Ross [12] describe a multimodal biometric verification system based on face, fingerprint, and hand-geometry features that uses fusion at the matching-score level. In [2], Zhang et al. extract the central area of a palmprint image for feature extraction, use 2-D Gabor filtering, and propose a phase-coding scheme to represent the palmprint feature. Hamming distance is used as a similarity metric. In [4], a palmprint identification scheme using a set of statistical signatures is presented. The set of statistical signatures, which includes density, spatial dispersivity, center of gravity, and texture energy, is used to characterize the palmprint. The signatures are extracted from the wavelet-transformed palmprint image. A classification and identification scheme based on these signatures is subsequently developed, and satisfactory results are reported. Han et al. [7] described a personal authentication system based on palmprint features extracted from scanner-based palmprint images. The palmprint features are extracted from the so-called region of interest (ROI). The feature vectors are derived from the ROI using three different grid sizes: 32 x 32, 16 x 16 and 8 x 8. The mean value of each grid cell is considered as the feature value. Two techniques are designed for the

Palmprint Verification using SIFT features Badrinath G. S. and Phalguni Gupta

Indian Institute of Technology Kanpur, India



978-1-4244-3322-3/08/$25.00 ©2008 IEEE


verification: the multiple-template matching method and the back-propagation neural network method. In [3], Kumar and Zhang extract both hand-geometry and palmprint features. A feature-fusion algorithm is used to test different classification schemes. The discrete cosine transform is used to extract features. Palmprint patterns consist of texture, lines, and wrinkles; these patterns are used individually for classification. The features are evaluated on diverse classification schemes: naive Bayes, decision trees, k-NN, SVM, and FFN. In [1], Ribaric et al. have proposed a system fusing features extracted from the palm using eigenpalms and from the fingers using eigenfingers. Features are fused at the matching-score level. The system achieves an Equal Error Rate (EER) of 0.58% on a database of 1820 hand images from 237 different classes. X. J. Wang [5] proposed a palmprint identification approach using boosted local binary pattern histograms to represent the local features of a palmprint image. In that method, the AdaBoost algorithm was used to select sub-windows for more discriminative classification. Systems using ink marking to capture palmprint patterns have been presented [6], [8]; because considerable attention and high cooperation are required to provide a biometric sample, such systems are not widely accepted. Recent papers on palmprint recognition use palmprint images captured with a digital camera [9], where images are acquired in a constrained environment using pegs. In this paper, palmprint verification is performed on images obtained using a low-cost flatbed scanner, and images are acquired in a constraint-free environment without pegs. In an earlier paper [11], Badrinath et al. proposed multi-algorithmic fusion, at the matching-score level, of palmprint features extracted using eigenpalms and the Haar wavelet. Hamming distance is used as a similarity metric for computing matching scores from the individual recognizers, and the sum rule is used to fuse the matching scores. The system is evaluated on the IITK and PolyU datasets, and satisfactory results are reported. Palmprint features can be extracted using various transforms such as the Fourier transform [10], discrete cosine transform [3], wavelet transform [2, 11], independent component analysis [15], Fisher discriminant analysis [16], and Karhunen-Loeve transform [1, 11]. In this paper, SIFT [14] is used to extract features, which describe the local structure of the image. The SIFT feature-generation process is described in the following section. The rest of the paper is organized as follows: Section II explains the method of feature extraction. The next section describes the proposed system and palmprint extraction. Section IV describes palmprint matching using Euclidean distance and verification based on a threshold. The experimental results for personnel verification are reported in

Section V. Finally, the conclusions are presented in Section VI.

II. SCALE INVARIANT FEATURE TRANSFORM

The Scale Invariant Feature Transform [14] is a recently emerged, cutting-edge methodology for pattern recognition and has been used in general object recognition and other machine vision applications [17, 18, 19, 24]. SIFT has been designed to extract highly distinctive invariant features from images. To make the features distinctive and stable, keypoints are efficiently detected through staged filtering in scale space. Feature vectors are formed from local patterns around keypoints in the scale-space decomposed image. The extracted feature vectors are called SIFT keys. The extracted features are invariant to image scaling, rotation, and translation, and partially invariant to illumination changes, addition of noise and affine distortion. Thus features can be matched correctly with high probability against features from a large database of images. The cost of extracting these features is minimized by taking a cascade filtering approach, in which the more expensive operations are applied only at locations that pass an initial test.

The major stages of computation used to generate the image features are as follows:

Fig. 1. The Gaussian convolved images at different scales, and the computation of the Difference-of-Gaussian images (from [14]).

A. Scale-space extrema detection

The first stage of keypoint identification is to search over all scales and image locations for points that can be repeatedly detected under different views of the same image. It is implemented efficiently by using a Difference-of-Gaussian (DoG) function to identify potential keypoints that are invariant to scale and orientation. The scale space of the input image, defined by the function L(x, y, σ), is formed by convolving (filtering) the original image with a variable-scale Gaussian G(x, y, σ):


L(x, y, σ) = G(x, y, σ) * I(x, y)    (1)

where I(x, y) is the given image, * is the convolution operation in x and y, and

G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))    (2)

To detect stable and distinct keypoint locations in scale space, the method proposed in [14] is used. It makes use of the scale-space local extrema of the DoG function D(x, y, σ), computed from the difference of two adjacent scales separated by a constant multiplicative factor k (shown in Fig. 1):

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)    (3)
           = L(x, y, kσ) − L(x, y, σ)

Interest points (candidate keypoints) are the local maxima or minima of the DoG images. To detect these extrema, each pixel is compared to its 8 neighbours in the current image and 9 neighbours in each of the two adjacent scale images, as shown in Fig. 2.
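The 26-neighbour comparison can be sketched directly. This is a minimal illustrative sketch, not the paper's code: it takes three adjacent DoG levels and returns pixels that are strict extrema over the 3 x 3 x 3 neighbourhood.

```python
import numpy as np

def dog_extrema(dog_prev, dog_cur, dog_next):
    """Candidate keypoints: local extrema of D(x, y, sigma), found by
    comparing each interior pixel to its 8 neighbours in the current DoG
    image and 9 in each adjacent scale (26 neighbours in total)."""
    points = []
    h, w = dog_cur.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = np.stack([dog_prev[y-1:y+2, x-1:x+2],
                              dog_cur[y-1:y+2, x-1:x+2],
                              dog_next[y-1:y+2, x-1:x+2]])
            neighbours = np.delete(patch.ravel(), 13)  # drop the centre pixel
            v = dog_cur[y, x]
            if v > neighbours.max() or v < neighbours.min():
                points.append((x, y))
    return points
```

A real implementation scans every triple of adjacent levels in the pyramid; the sketch shows one triple for clarity.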

Fig. 2. Local maxima and minima detection in difference-of-Gaussian images by comparing a pixel (marked with X) to its 26 neighbours in 3x3 regions (from [14])

B. Accurate keypoint localisation

This stage eliminates candidate keypoints that have low contrast or are poorly localised along an edge. Low-contrast points are sensitive to noise. For each keypoint candidate found by comparing a pixel to its neighbours, a detailed fit using a Taylor expansion is performed to determine the interpolated location of the extremum; this eliminates low-contrast points.

The principal curvature across the edge is used to eliminate poorly localised extrema on an edge. An extremum on an edge has a large principal curvature across the edge but a small curvature in the perpendicular direction in the DoG function. A 2 × 2 Hessian matrix H is computed at the location and scale of the keypoint to determine the curvature:

H = | Dxx  Dxy |
    | Dxy  Dyy |    (4)

where the subscripts of D denote the respective partial derivatives of D(x, y, σ). If one eigenvalue of H is significantly larger than the other, the extremum lies on an edge and is rejected.

C. Orientation assignment

This step aims to assign a consistent orientation to each keypoint based on local image properties, which makes the keypoint descriptor rotation invariant. To determine the keypoint orientation, a gradient-orientation histogram is computed in the neighbourhood of the keypoint. The gradient magnitude and orientation are computed using

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )    (5)

θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (6)

Each pixel or sample in the neighbourhood is weighted by its gradient magnitude and by a Gaussian-weighted circular window with σ equal to 1.5 times the scale of the keypoint. This has the effect of giving higher weight to samples near the center of the window. Peaks in the histogram correspond to dominant orientations of the local gradients. The highest peak in the histogram is used as the orientation of the keypoint, and any other peak within 80% of the height of this peak creates a new keypoint with that orientation. All properties of the keypoint are measured relative to the keypoint orientation; hence the properties are invariant to image rotation.
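Eqs. (5)-(6) plus the Gaussian weighting and 80% peak rule can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the neighbourhood radius of 4 samples and the 36-bin histogram are assumed values (the paper does not state them).

```python
import numpy as np

def orientation_histogram(L, x, y, scale, radius=4, bins=36):
    """Gradient-orientation histogram around a keypoint. Each sample is
    weighted by its gradient magnitude (Eq. 5) and by a Gaussian window
    with sigma = 1.5 * scale; orientation comes from Eq. (6)."""
    sigma = 1.5 * scale
    hist = np.zeros(bins)
    for j in range(-radius, radius + 1):
        for i in range(-radius, radius + 1):
            yy, xx = y + j, x + i
            if not (0 < yy < L.shape[0] - 1 and 0 < xx < L.shape[1] - 1):
                continue
            dx = L[yy, xx + 1] - L[yy, xx - 1]
            dy = L[yy + 1, xx] - L[yy - 1, xx]
            m = np.hypot(dx, dy)                      # Eq. (5)
            theta = np.arctan2(dy, dx) % (2 * np.pi)  # Eq. (6)
            w = np.exp(-(i * i + j * j) / (2 * sigma ** 2))
            hist[int(theta / (2 * np.pi) * bins) % bins] += w * m
    return hist

def dominant_orientations(hist):
    """The highest peak, plus any bin within 80% of its height."""
    peak = hist.max()
    return [b for b in range(len(hist)) if hist[b] >= 0.8 * peak]
```

Each returned bin index corresponds to one keypoint orientation; a bin other than the argmax spawns a duplicate keypoint with that orientation.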

D. The keypoint descriptor

In this step a feature vector of 128 values is computed for the local image region at each keypoint, identifying it uniquely. The local image gradients are measured at the selected scale in the region around each keypoint. These values are represented as small arrows at each sample around the keypoint. A Gaussian


Fig. 3: Block diagram of the steps involved in building a robust palmprint-based biometric system for personnel verification

weighting function with σ related to the scale of the keypoint is used to weight the gradient magnitudes. Once the keypoint orientation is selected, the feature descriptor is computed by creating orientation histograms over a 4 × 4 neighbourhood. Each descriptor contains an array of 4 × 4 = 16 histograms, and each histogram contains 8 bins, so the descriptor contains 4 × 4 × 8 = 128 elements as the SIFT feature vector. (Fig. 4 shows a descriptor array of 2 × 2 = 4 histograms, each with 8 orientations, producing 2 × 2 × 8 = 32 elements.)

Fig 4. Left: the gradient magnitude and orientation at each sample point in a square region around the keypoint location, weighted by a Gaussian window indicated by the overlaid circle. Right: the image gradients are accumulated into orientation histograms. Each histogram includes 8 directions, indicated by the arrows, and is computed from a 4x4 subregion. The length of each arrow corresponds to the sum of the gradient magnitudes near that direction within the region. This figure shows a 2x2 descriptor array computed from an 8x8 set of samples, producing 2x2x8 = 32 elements for this example. (From [14])
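The 4 x 4 x 8 = 128 accumulation described above can be sketched as follows. This is a simplified illustrative sketch, not the paper's code: rotation to the keypoint orientation, the Gaussian weighting of Fig. 4, and trilinear interpolation between bins are omitted to keep the 128-element structure visible.

```python
import numpy as np

def sift_descriptor(L, x, y, cells=4, cell_size=4, bins=8):
    """Simplified descriptor: orientation histograms over a cells x cells
    grid of cell_size x cell_size samples around the keypoint, giving a
    4 x 4 x 8 = 128-element vector, unit-normalised for illumination."""
    half = cells * cell_size // 2
    desc = np.zeros((cells, cells, bins))
    for j in range(-half, half):
        for i in range(-half, half):
            yy, xx = y + j, x + i
            if not (0 < yy < L.shape[0] - 1 and 0 < xx < L.shape[1] - 1):
                continue
            dx = L[yy, xx + 1] - L[yy, xx - 1]
            dy = L[yy + 1, xx] - L[yy - 1, xx]
            m = np.hypot(dx, dy)
            theta = np.arctan2(dy, dx) % (2 * np.pi)
            cy, cx = (j + half) // cell_size, (i + half) // cell_size
            desc[cy, cx, int(theta / (2 * np.pi) * bins) % bins] += m
    v = desc.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The unit normalisation at the end is what gives the partial illumination invariance mentioned in Section II.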

III. PROPOSED SYSTEM

The block diagram of the proposed robust biometric system for personnel verification using SIFT features of the palmprint is shown in Fig. 3. The process starts with acquiring an input image of the hand using a low-cost scanner. In the region-of-interest module, the image is pre-processed using standard image enhancement procedures and normalised, and the region of interest is extracted. In the following module, features are extracted using SIFT; in subsequent modules, matching between the feature vectors of the live image and an enrolled image in the database is computed using a Euclidean distance metric. The matching decision is based on a threshold. The steps involved in the system are described in detail below.

A. Image acquisition

Hand images are obtained by a flatbed scanner in grayscale at a spatial resolution of 200 dots per inch, which takes 6 seconds per scan. The shutter of the scanner is not closed, to avoid non-uniform background illumination/reflection. The device is constraint (peg) free, so hand placement is independent of rotation relative to the line of symmetry of the scanner's working surface. A typical gray-level image obtained from the scanner is shown in Fig. 5a.

Fig. 5a. Scanned Image Fig. 5b. Binarised Image

B. Preprocessing and region-of-interest extraction

In this phase the hand image is pre-processed and the palmprint region is extracted. Due to the regular, controllable, uniform background illumination during image capture, and the contrasting colour of the hand and background, global thresholding is applied to extract the hand from the background. Fig. 5b shows the binarized image of Fig. 5a. The image is pre-processed using image enhancement procedures and morphological operations to remove any isolated small blobs or holes. Applying a contour-tracing algorithm [13], the contour of the hand image is extracted, as shown in Fig. 6a. Based on the contour of the hand, the fingertips and the valleys (V1, V2) between fingers are determined, as shown in Fig 6a. Two reference points are selected:

1. The valley between the little finger and the ring finger (point V1), and

[Fig. 3 block-diagram labels: Image Acquisition → Extract Region of Interest (Palmprint) → SIFT Feature Extraction → Matching Points → Threshold → Accepted/Rejected, with Register/Verify paths to the Database]


2. The valley between the index finger and the middle finger (point V2).
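The hand segmentation that precedes reference-point detection relies on global thresholding against the controlled background. A minimal numpy sketch (the threshold value 100 is an illustrative assumption, not a value from the paper; morphological clean-up and contour tracing [13] would follow):

```python
import numpy as np

def extract_hand_mask(gray, threshold=100):
    """Global thresholding: pixels brighter than the threshold are taken
    as hand, the darker uniform background as 0 (cf. Fig. 5a -> Fig. 5b)."""
    return (gray > threshold).astype(np.uint8)

# Usage: a synthetic 8-bit image with a bright "hand" region on dark background
img = np.zeros((100, 100), dtype=np.uint8)
img[20:80, 30:70] = 200
mask = extract_hand_mask(img)
```

A single global threshold suffices here only because the scanner provides uniform, controllable illumination, as the text notes.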

Fig. 6a: Hand Contour and reference point

Fig 6b: Relevant points and Region of interest (palmprint)

The region of interest (palmprint) is the square area shown in Fig. 6b, with two of its corners placed at the midpoints of the line segments C1-V1 and V2-C2. The line segments C1-V1 and V2-C2 are inclined at angles of 45 and 60 degrees, respectively, to the line joining V1 and V2.

Fig. 7a. Region of Interest in Gray scale image

Fig. 7b. Extracted Region of Interest (palmprint)

The extracted palmprints (Fig. 7b) from the original gray-scale images vary in size from subject to subject, because the area of the palm differs for each subject. Placement of the palm on the scanner is peg free, so the orientation of placement varies from one incident to another. Two images of a subject with different orientations of placement are shown in Fig. 8. The extracted region of interest is defined relative to the valley points V1 and V2, which are stable for the subject. So the extracted region from different incidents of the same subject remains the same, as shown in Fig. 9 (the corresponding hand images are shown in Fig. 8), independent of the orientation of palm placement relative to the symmetry line of the scanner's working surface. Hence the proposed region-of-interest extraction procedure makes the system robust to rotation. Empirical results show that the system is robust to rotation of about ±35 degrees. The SIFT approach used to extract features is scale invariant, so images need not be normalized to exactly the same size.

Fig. 8. Images of the same subject with different orientations of placement relative to the symmetry (yellow line) of the work surface.

C. Palmprint image feature extraction

After hand-image pre-processing and region-of-interest (palmprint) extraction, features are extracted for later recognition. SIFT is used to extract a feature vector that provides good discrimination ability, and it is used here for palmprint-based personnel verification.

Fig. 9. Extracted region of interest for images shown in Fig. 8.

The detected SIFT keypoints for a palmprint image are shown in Fig. 10.

Fig. 10. Keypoints of the Palmprint.

IV. MATCHING

In this section, the matching strategy used to compare the live-image descriptors against the w image descriptors of the claiming user collected during enrollment, exploiting the repeatability and stability of the SIFT features in both the


images, is discussed. To authenticate a palmprint, the SIFT features of each keypoint computed in the live image are matched to every keypoint of the enrolled image in the database.

Given two images Q and E, representing the live image and an enrolled image from the database respectively, let their computed feature-vector arrays be given by:

Q = {q1, q2, q3, …, qm}    (7)

E = {e1, e2, e3, …, en}    (8)

The keypoints qi of Q and ej of E are assumed to be matched if the Euclidean distance ||qi − ej|| between them is less than a specified threshold T:

IsMP(qi, ej) = 1 if ||qi − ej|| < T, and 0 otherwise    (9)

The matching pair (qi, ej) with minimum distance is removed, and the matching process continues until no further matching points are found. Based on the number of matching points between the live image Q and the enrolled image E, the user is accepted or rejected.
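The greedy matching loop described above, with Eq. (9) as the acceptance test, can be sketched as follows. This is an illustrative sketch of that procedure (not the authors' code): at each step the globally closest pair is taken, accepted if it is under the threshold T, and both points are removed from further consideration.

```python
import numpy as np

def count_matching_points(Q, E, T):
    """Greedy matching per Eq. (9): repeatedly take the closest (q_i, e_j)
    pair under Euclidean distance; accept it if the distance is below T,
    then remove both points. Returns the number of matching points."""
    Q, E = list(map(np.asarray, Q)), list(map(np.asarray, E))
    matches = 0
    while Q and E:
        dists = np.array([[np.linalg.norm(q - e) for e in E] for q in Q])
        i, j = np.unravel_index(dists.argmin(), dists.shape)
        if dists[i, j] >= T:
            break                      # no remaining pair satisfies Eq. (9)
        matches += 1
        Q.pop(i); E.pop(j)
    return matches
```

The verification decision then reduces to comparing the returned count against a decision threshold on the number of matching points.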

Fig. 11. Matching points from two palmprint images of the same subject.

The number of matching points represents the similarity between the palmprint images Q and E; the idea is that the greater the number of matching points between two images, the greater the similarity between them. Two dissimilar palm images fail this test, since the number of matching points between them is small.

Matching points on two different palmprint images of the same subject are shown in Fig. 11.

Fig. 12a. PolyU sample image Fig. 12b. Reference points

V. EXPERIMENTAL RESULTS

The proposed system is tested on two image databases: the Indian Institute of Technology Kanpur (IITK) database, consisting of images captured using a flatbed scanner with the user free to rotate the hand by about ±35° relative to the symmetry of the scanner's working surface, and the Hong Kong Polytechnic University (PolyU) database, consisting of images captured using a CCD with the user constrained by pegs [2].

A. IITK database

IITK has collected a database comprising 200 images from 100 subjects. Each image has been captured at a spatial resolution of 200 dots per inch and 256 gray levels using a low-cost flatbed scanner.

B. PolyU database

The proposed system has also been tested on the database from the Hong Kong Polytechnic University (PolyU) [9], which consists of 7752 grayscale images corresponding to 386 different palms. Around 17 images per palm were collected in two sessions. The images were captured at a spatial resolution of 75 dots per inch and 256 gray levels using a CCD [2], with pegs in place. Fig. 12a shows a sample image from the database.

Fig. 13a. Region of interest in gray scale image

Fig. 13b. Extracted palmprint

The ROI extraction procedure described above is not directly applicable to the PolyU images, so the following method is proposed to extract the region of interest for the PolyU database. Four reference points P1, P2, P3 and P4 are located on the contour of the palm, as shown in Fig. 12b. After binarising, a 200 x 200


pixel palm area is extracted with its center coinciding with the intersection of the line segments P1-P2 and P3-P4. Fig. 13a shows the region of interest in the gray-scale image, and the extracted palmprint image is shown in Fig. 13b. The experiment is conducted as follows. One image of user i is selected from the database and matched against all other images of the same user i, and false rejections are noted. The process is repeated for all other images of user i, and then for all 100 users. For false acceptance, every image of user i in the database is compared with the images of the remaining 99 users; this is done for all 100 users and the results are noted. The conducted experiments are more demanding than the rank-one recognition method, because only inter-class (between test set and training set) comparisons are made for false acceptance and false rejection; intra-class (between the training/testing set and itself) comparisons are not performed.

TABLE I
ACCURACY, FAR AND FRR (%) OF THE PROPOSED SYSTEM AND [11]

                 IITK database              PolyU database
                 FAR    FRR    Accuracy     FAR    FRR    Accuracy
Eigenpalm [11]   9.65   9.43   90.45        6.44   8.38   92.58
Haarpalm [11]    5.00   13.8   90.57        4.56   12.8   91.28
Fusion [11]      4.73   5.66   94.80        4.17   8.10   93.85
Proposed         0.02   0.62   99.67        3.25   7.88   94.42

The experiment has been performed on both datasets using SIFT. The Receiver Operating Characteristic (ROC) curves of the proposed system for the IITK and PolyU databases are shown in Fig. 15. Table I shows the accuracy, FAR and FRR of the proposed and earlier [11] systems for both the IITK and PolyU databases. Compared to the previous system [11], the accuracy of the proposed system has improved to 99.67%, and the FAR is reduced to 0.02%. The experimental results illustrate that the proposed system performs better than the earlier known system, which fuses the matching scores of the Haar-wavelet and eigenpalm classifiers.
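The FAR/FRR bookkeeping in the protocol above can be sketched as follows. This is an illustrative sketch, not the authors' evaluation code; the accuracy formula shown, accuracy = 100 − (FAR + FRR)/2, is one common definition and is approximately consistent with the numbers in Table I (0.02, 0.62 → 99.68 ≈ 99.67), but the paper does not state its exact formula.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """FAR/FRR/accuracy from matching-point counts: a genuine attempt is
    falsely rejected if its count falls below the decision threshold; an
    impostor attempt is falsely accepted if it reaches the threshold."""
    fr = sum(1 for s in genuine_scores if s < threshold)
    fa = sum(1 for s in impostor_scores if s >= threshold)
    frr = 100.0 * fr / len(genuine_scores)
    far = 100.0 * fa / len(impostor_scores)
    accuracy = 100.0 - (far + frr) / 2.0   # assumed definition, see lead-in
    return far, frr, accuracy
```

Sweeping the threshold over its range and plotting FAR against FRR (or genuine acceptance rate) yields the ROC curves of Fig. 15.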

Fig. 15. ROC curves for the IITK and PolyU databases

VI. CONCLUSION

In this paper, a biometric system that is robust to rotation, translation and scale of the palm image has been presented. The extracted palmprint is invariant to the orientation and translation of the palm on the scanner, which makes the system robust to rotation and translation. The features extracted using SIFT are also scale invariant, which improves the system's robustness to the spatial resolution of the image. The recognition accuracy of 99.67%, together with FAR 0.02% and FRR 0.62%, demonstrates the feasibility of using this system in high-security environments.

The proposed system has been developed for images obtained from a low-cost scanner. The extracted palmprint region is located relative to the valleys between the fingers, which are stable for a given person, so the extracted region is invariant to rotation of the palm on the scanner. Hence the proposed system is robust to rotation and translation of the palm on the scanner. Since the SIFT operator is used to extract the features, the extracted features are invariant to the scale of the image, and hence the system is invariant to the spatial resolution of the input images. Thus the robustness, the performance, and the use of a low-cost scanner for palm image acquisition suggest the system's suitability for both civilian applications and high-security environments.

REFERENCES

[1] S. Ribaric and I. Fratric, "A biometric identification system based on eigenpalm and eigenfinger features," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1698–1709, Nov. 2005.

[2] D. Zhang, A. W.-K. Kong, J. You, and M. Wong, "Online palmprint identification," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041–1050, Sep. 2003.

[3] A. Kumar and D. Zhang, "Personal recognition using hand shape and texture," IEEE Trans. on Image Processing, vol. 15, no. 8, pp. 2454–2461, Aug. 2006.

[4] L. Zhang and D. Zhang, "Characterization of palmprints by wavelet signatures via directional context modeling," IEEE Trans. on Systems, Man, and Cybernetics, vol. 34, no. 3, pp. 1335–1347, Jun. 2004.

[5] X. Wang, H. Gong, and H. Zhang, "Palmprint identification using boosting local binary pattern," 18th International Conference on Pattern Recognition, vol. 3, pp. 503–506, Aug. 2006.

[6] D. Zhang and W. Shu, “Two novel characteristics in palmprint verification: Datum point invariance and line feature matching,” Pattern Recognition, vol. 32, no. 4, pp. 691–702, 1999.

[7] C.-C. Han, H.-L. Cheng, C.-L. Lin, and K.-C. Fan, "Personal authentication using palmprint features," Pattern Recognition, vol. 36, pp. 371–381, 2003.

[8] N. Duta, A. K. Jain, and K. Mardia, "Matching of palmprints," Pattern Recognition Letters, vol. 23, no. 4, pp. 477–485, 2001.

[9] The PolyU palmprint database [Online]. Available: http://www.comp.polyu.edu.hk/~biometrics.

[10] W. Li, D. Zhang, and Z. Xu, "Palmprint identification by Fourier transform," Intl. Journal of Pattern Recognition and Artificial Intelligence, vol. 16, no. 4, pp. 417–432, 2002.

[11] G. S. Badrinath and P. Gupta, "An efficient multi-algorithmic fusion system based on palmprint for personnel identification," Intl. Conf. on Advanced Computing, pp. 759–764, 2007.

[12] A. K. Jain, A. Ross, and S. Pankanti, "A prototype hand geometry-based verification system," 2nd Intl. Conf. on Audio- and Video-based Personal Authentication, pp. 166–171, Mar. 1999.

[13] T. Pavlidis, Algorithms for Graphics and Image Processing. Springer-Verlag, 1982.

[14] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[15] G.-M. Lu, K.-Q. Wang, and D. Zhang, "Wavelet based independent component analysis for palmprint identification," Intl. Conf. on Machine Learning and Cybernetics, vol. 6, pp. 3547–3550, 2004.

[16] Y. Wang and Q. Ruan, "Kernel Fisher discriminant analysis for palmprint recognition," 18th International Conference on Pattern Recognition, pp. 457–460, 2006.

[17] D. Lowe, "Local feature view clustering for 3D object recognition," IEEE Conf. on Computer Vision and Pattern Recognition, pp. 682–688, 2001.

[18] D. Lowe, "Object recognition from local scale-invariant features," Int. Conf. on Computer Vision, pp. 1150–1157, 1999.

[19] P. Chakravarty, "Vision-based indoor localization of a motorized wheelchair," Technical Report, Dept. of Electrical and Computer Systems Engineering, Monash University, Australia, 2005.

[20] W. Shu and D. Zhang, “Automated Personal Identification by Palmprint,” Optical Engineering, vol. 37, no. 8, pp. 2359-2362, Aug. 1998.

[21] D. Zhang, Automated Biometrics—Technologies and Systems. Boston: Kluwer Academic, 2000.

[22] “Independent Testing of Iris Recognition Technology Final Report,” Int’l Biometric Group, May 2005.

[23] Int’l Committee for Information Technology Standards, Technical Committee M1, Biometrics, http://www.incits.org/tc_home/m1.htm, 2005.

[24] M. Bicego, A. Lagorio, E. Grosso, and M. Tistarelli, "On the use of SIFT features for face authentication," Computer Vision and Pattern Recognition Workshop, pp. 35–41, Jun. 2006.