
Iris or Periocular? Exploring Sex Prediction from Near Infrared Ocular Images

Denton Bobeldyk∗† and Arun Ross∗
∗Michigan State University, East Lansing, MI 48824
†Davenport University, Grand Rapids, MI 49512

[email protected], [email protected]

Abstract—Recent research has explored the possibility of automatically deducing the sex of an individual based on near infrared (NIR) images of the iris. Previous articles on this topic have extracted and used only the iris region, while most operational iris biometric systems typically acquire the extended ocular region for processing. Therefore, in this work, we investigate the sex-predictive accuracy associated with four different regions: (a) the extended ocular region; (b) the iris-excluded ocular region; (c) the iris-only region; and (d) the normalized iris-only region. We employ the BSIF (Binarized Statistical Image Feature) texture operator to extract features from these regions, and train a Support Vector Machine (SVM) to classify the extracted feature set as Male or Female. Experiments on a dataset containing 3314 images suggest that the iris region provides only modest sex-specific cues compared to the surrounding periocular region. This research further underscores the importance of using the periocular region in iris recognition systems.

Index Terms—iris, periocular, biometrics, soft biometrics, sex prediction, binarized statistical image feature (BSIF)

I. INTRODUCTION

A biometric system uses the physical or behavioral trait of an individual to automatically recognize the individual. The physical or behavioral trait that is used for recognition is referred to as a biometric trait. Examples of biometric traits include face, fingerprint, iris, voice, gait and hand geometry [2]. The focus of the current work is on the iris trait. Iris has been observed to be a powerful biometric cue and is currently being used successfully in several large scale projects (e.g., the United Arab Emirates border crossing system and India's Aadhaar program).

Besides biometric traits, there are other attributes that may not be unique to an individual, but that can offer useful information about the individual. These are known as soft biometrics. Gender, age, and ethnicity are all examples of soft biometrics [3].

There are several advantages to developing algorithms capable of automatically extracting soft biometric traits. While a single soft biometric trait may not offer sufficient information to uniquely recognize an individual, it can be combined with primary biometric traits in a fusion framework to improve recognition accuracy [4]. Besides improving recognition accuracy, these attributes provide additional semantic information about an unknown subject that bridges the gap between human and machine descriptions of individuals (e.g., "middle-aged male Caucasian"). Soft biometric algorithms can also be used in scenarios where traditional biometric matchers may not be easily used. For example, it may be possible to extract soft biometric information from poor quality images or multispectral images (e.g., visible versus near-infrared) where traditional biometric matchers are likely to fail. Predicting soft biometric attributes accurately has applications in criminal investigations as well. Specifically, they can be used to exclude some suspects from further consideration. In addition to the above advantages, it may be possible to automatically glean aggregate demographic information from a central biometrics database (e.g., age distribution of subjects).

Sex1 prediction from NIR iris is a relatively new problem.2

Previous research in this area has explored predicting sex only from the iris region; however, most iris systems capture images of the extended ocular region, rather than just the iris (see Figure 1). In the biometrics literature, the region around the eyes is often referred to as the periocular region. Thus we raise the following questions:

• Does the extended ocular region offer more sex cues than the iris region?

• Does the normalized iris image3 result in better sex discrimination than the non-normalized iris image?

In this paper, we explore these questions systematically by designing an experimental protocol that allows us to draw conclusions for each of them.

II. RELATED WORK

One of the earliest works on automatically deducing sex from iris images was conducted by Thomas et al. [7]. The authors assembled a dataset of 57,137 ocular images. From each image, the iris was automatically segmented and transformed into a 20 × 240 image using the rubber sheet transformation model [8]. Then a 1D Gabor filter was used to generate a feature vector. An information gain metric was used to perform feature selection. A decision tree algorithm was then used to classify the reduced feature vector as 'Male' or 'Female'. The

1 We use the term "sex" in this paper, rather than "gender", since the former has a genetic basis while the latter is assumed to be influenced by social, personal and psychological factors.

2 It should be noted that there is a related publication that explores this problem in the RGB color spectrum by cropping out and using the periocular region from a face image [5].

3 The normalized iris is a rectangular rendition of the annular iris and is obtained by sampling the segmented iris region in the radial and angular directions using a rubber sheet model [6].

D. Bobeldyk and A. Ross, "Iris or Periocular? Exploring Sex Prediction from Near Infrared Ocular Images," Proc. of the 15th International Conference of the Biometrics Special Interest Group (BIOSIG), (Darmstadt, Germany), September 2016


(a) Ocular Image (b) Iris-Only Image (c) Iris-Excluded Ocular Image

Fig. 1: How much sex information is encoded in the iris compared to the surrounding periocular region? Original image taken from [1].

Fig. 2: The top row shows sample ocular images from male subjects, while the bottom row shows samples from female subjects. The images are from [1].

paper does not indicate if a subject disjoint4 training and test dataset was used; however, it states that only left iris images were used. Their best performance with bias reduction due to ethnicity was 'upwards of 80% with Bagging'.

Bansal et al. [9] were able to achieve an 83.06% sex classification accuracy using statistical and wavelet features along with an SVM classifier. Occlusions from the iris region (i.e., eyelids, eyelashes) were removed using an unspecified masking algorithm. The size of their dataset, however, was quite small, with only 150 subjects and 300 iris images; 100 of the subjects were male and 50 were female. It is not clear if they used a subject-disjoint evaluation protocol.

An earlier 2011 paper by Lagree and Bowyer [10] used texture descriptors to predict gender from an NIR iris image, and explicitly stated that they had used a subject-disjoint training and test set. They collected a total of 600 images from 60 male and 60 female subjects. The iris images were normalized using Daugman's rubber sheet method [8] but using a different sampling frequency, resulting in a normalized

4 A subject disjoint experimental protocol stipulates that subjects in the training and test sets should be mutually exclusive.

iris image of 40 × 240 (as opposed to 20 × 240). The features were extracted using 6 spot/line detectors and 3 Laws' texture measures [11]. They used the SMO support vector algorithm but were unable to achieve an accuracy greater than 62% on this dataset.

III. FEATURE EXTRACTION

Existence of sex-specific attributes in the iris has been alluded to in the medical literature [12], [13], [14]. From a computer vision standpoint, it is essential to use a texture descriptor that can potentially extract these attributes. While a number of texture descriptors have been discussed in the literature, the use of Binarized Statistical Image Features (BSIF) is explored in the context of this work, primarily because our preliminary experiments suggest that this operator outperforms other descriptors on the task of sex classification.

Kannala and Rahtu [15] introduced the BSIF texture descriptor. In their paper, the authors showed that BSIF was able to outperform both Local Binary Patterns and Local Phase Quantization on the Outex and CUReT texture datasets [15]. BSIF projects the image into a subspace generated by convolving the image with pre-learned filters. In order to learn the


Fig. 3: Block diagram depicting the various stages of the sex prediction algorithm used in this work

(a) Ocular Image (b) Iris-Only Image

(c) Iris-Excluded Ocular Image

Fig. 4: Tessellations applied to the four image regions. Original image taken from [1].

filters, the authors randomly sampled 50,000 patches of size k × k from the 13 natural images provided in [16]. Principal Components Analysis was then applied to the sampled data, keeping only the top n principal components. The n principal components were then subjected to a whitening transform. Independent Component Analysis was then applied to the dimension-reduced data, resulting in n filters each of size k × k (after reshaping each independent component vector into a matrix).
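The filter-learning recipe above can be sketched in code. This is a compact illustration only: it uses caller-supplied patches instead of the 50,000 natural-image patches of [16], and a small hand-rolled symmetric FastICA iteration stands in for the ICA step of [15].

```python
import numpy as np

def learn_bsif_filters(patches, n, k, iters=100, seed=0):
    """Learn n filters of size k x k from flattened k*k patches:
    PCA keeping the top n components, whitening, then a symmetric
    FastICA-style iteration (a sketch, not the exact variant of [15])."""
    X = patches - patches.mean(axis=0)
    # PCA and whitening: top-n eigenvectors scaled to unit variance
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1][:n]
    W = vecs[:, order] / np.sqrt(vals[order])   # (k*k) x n whitening transform
    Z = X @ W                                   # whitened, dimension-reduced data
    # FastICA with tanh nonlinearity and symmetric decorrelation
    rng = np.random.default_rng(seed)
    U, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
    B = U @ Vt                                  # random orthogonal start
    for _ in range(iters):
        G = np.tanh(Z @ B.T)
        B_new = (G.T @ Z) / len(Z) - np.diag((1 - G ** 2).mean(axis=0)) @ B
        U, _, Vt = np.linalg.svd(B_new)
        B = U @ Vt                              # symmetric decorrelation
    # map independent components back to image space: one filter per row
    return (B @ W.T).reshape(n, k, k)
```

Reshaping each independent component vector back into a k × k matrix, as described above, yields the filter bank that is convolved with the input image in the next step.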

Each of the generated filters is convolved with the input image. The spatial filter responses due to each filter are then binarized by comparing them to a threshold. Thus, at each pixel there are n binary responses corresponding to the n filters, i.e., each pixel can now be encoded as an n-bit string. This binary string is then converted to a decimal value.

For example, if the binarized responses for the first, second, third, fourth and fifth filters are 0, 1, 1, 0, and 1 respectively, the resulting decimal value would be 13, since (01101)2 = (13)10.
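The binarize-and-encode step can be sketched as follows (a minimal illustration that accepts any k × k filters; the actual operator uses the pre-learned ICA filters described above):

```python
import numpy as np

def conv2_same(image, filt):
    """'Same'-size 2-D correlation with zero padding (sufficient for a sketch)."""
    k = filt.shape[0]
    p = k // 2
    padded = np.pad(image.astype(float), p)
    out = np.empty(image.shape)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + k, c:c + k] * filt)
    return out

def bsif_encode(image, filters, threshold=0.0):
    """Binarize each filter response and pack the n bits into a decimal code.
    Returns an array the same size as the image with values in [0, 2**n - 1]."""
    n = len(filters)
    codes = np.zeros(image.shape, dtype=int)
    for i, filt in enumerate(filters):
        bit = (conv2_same(image, filt) > threshold).astype(int)
        codes += bit << (n - 1 - i)  # first filter gives the most significant bit
    return codes

# The worked example from the text: responses 0,1,1,0,1 give (01101)_2 = 13
bits = [0, 1, 1, 0, 1]
assert sum(b << (len(bits) - 1 - i) for i, b in enumerate(bits)) == 13
```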

When the aforementioned procedure is applied to an image, the resulting response matrix will be the same size as the image. Each response will be in the integral range [0, 2^n − 1]. For example, when using n = 6 filters, the response values will fall in the range [0, 63]. The aforementioned process was repeated by varying n in the interval [5, 12] and using the following patch sizes: 3×3, 5×5, 7×7, 9×9, 11×11, 13×13, 15×15, 17×17. If the raw response values were used to create a feature vector, the length of the feature vector would be the same size as the image. In order to reduce the size of the feature vector and to obtain local statistical information, the

Fig. 5: Geometrically adjusted ocular image

image was tessellated into 20×20 square regions (see Figure 4). A histogram of the BSIF responses was calculated for each region. Each of the regional histograms was normalized and then concatenated to form a single feature vector. This vector was then input to a trained binary SVM classifier in order to predict the sex: male or female.
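The tessellation-and-histogram step can be sketched as follows (assuming, as the feature-vector lengths quoted in Section VI imply, that each block is a 20 × 20-pixel square):

```python
import numpy as np

def bsif_histogram_features(codes, n_bits, block=20):
    """Tessellate a BSIF code map into block x block tiles, compute a
    normalized 2**n_bits-bin histogram per tile, and concatenate them."""
    bins = 2 ** n_bits
    h, w = codes.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = codes[r:r + block, c:c + block]
            hist = np.bincount(tile.ravel(), minlength=bins).astype(float)
            feats.append(hist / hist.sum())  # per-tile normalization
    return np.concatenate(feats)
```

For a 380 × 440 ocular code map with 5-bit codes, this yields 19 × 22 = 418 tiles of 32 bins each, i.e., the 13,376-dimensional vector reported later in the paper.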

IV. BIOCOP DATASET

The anonymized BioCOP iris dataset was used in this research. The images in the dataset were obtained using a near-infrared (NIR) sensor. Using a commercial off-the-shelf iris SDK, the center of the iris and its radius were automatically located. The iris was then centered horizontally, and the image was geometrically scaled such that the iris had a fixed radius of 120 pixels. The scaled image was then cropped around the repositioned iris region so as to have a 40-pixel border below


TABLE I: The subset of the BioCOP iris dataset that was used in our experiments

Sex      # of Subjects   # Left Images   # Right Images   Total # Images
Male     580             889             831              1720
Female   503             822             772              1594

(a) Before (b) After

Fig. 6: Example of a geometrically adjusted image. Original image taken from [1].

the iris and 100-pixel borders on the top and sides. The size of the scaled and cropped image was 380 × 440. See Figure 5. A total of 181 images, corresponding to about 5% of the entire dataset, were discarded during this step (for example, some images did not include the whole iris or could not be centered appropriately). The final dataset consisted of 580 male subjects with 1720 images and 503 female subjects with 1594 images (please see Table 1 for a more complete breakdown). For each subject, images from both the left and right irides were included when available.
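The geometric adjustment described above can be sketched as follows. This is an illustration under stated assumptions: the iris center (cx, cy) and radius r are taken as given (the paper obtains them from a commercial SDK), and crude nearest-neighbour scaling stands in for whatever interpolation the SDK pipeline used. Note that 100 + 2 × 120 + 40 = 380 rows and 100 + 2 × 120 + 100 = 440 columns, matching the stated crop size.

```python
import numpy as np

def geometrically_adjust(image, cx, cy, r, target_r=120):
    """Scale so the iris radius becomes target_r, then crop 380 x 440 around
    the iris: 100 px on the top and sides, 40 px below the iris."""
    s = target_r / r
    h, w = image.shape
    # nearest-neighbour scaling (enough for a sketch)
    ys = (np.arange(int(h * s)) / s).astype(int)
    xs = (np.arange(int(w * s)) / s).astype(int)
    scaled = image[np.ix_(ys, xs)]
    cx, cy = int(cx * s), int(cy * s)
    top = cy - target_r - 100   # 100-px border above the iris
    left = cx - target_r - 100  # 100-px border on each side
    return scaled[top:top + 380, left:left + 440]
```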

V. EXPERIMENTS

Previous work [17] has demonstrated success when the uniform LBP(8,2) texture operator was used for sex classification of the iris. However, our initial experiments showed that BSIF outperforms uLBP(8,2) in this problem domain.5

We conduct experiments to compare the sex prediction accuracy of the iris with that of the periocular region. From each geometrically adjusted ocular image in the dataset, three different sub-images were extracted: the iris-excluded ocular image, the iris-only image, and the normalized iris-only image. This resulted in the following four regions.

Ocular Image: This is the entire scaled and cropped operational iris image (380 × 440). See Figure 1(a).

Iris-Only Image: The portion of the ocular image which encloses the entire iris region. The center of the image coincides with the center of the iris, and the width of the image is twice the iris radius, resulting in 240 × 240 images. No masking was performed to remove the eyelid or eyelash pixels. See Figure 1(b)(ii).

5 In [17] the authors claim that their dataset has 1500 unique subjects, though our investigation has revealed a much smaller number of distinct subjects. The authors of [17] have confirmed via email that there were significant errors in their subject labels. When the incorrect labels are used, the proposed BSIF approach resulted in a 96.1% accuracy on the left iris, compared to the 91.33% accuracy obtained in [17]. The high accuracy is due to overlapping subjects in the training and test sets, which is a consequence of the incorrect labeling.

Normalized Iris-Only Image: The unwrapped iris-only image using Daugman's rubber sheet method [8]. The iris was sampled 20 times radially and 240 times angularly, resulting in a 20 × 240 rectangular image. See Figure 1(b)(i).

Iris-Excluded Ocular Image: The ocular image with the iris-only region excluded. The portion of the image that was removed was zeroed out, essentially creating a black square in the middle of each of the images. See Figure 1(c).
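The three sub-images can be derived from the geometrically adjusted ocular image roughly as follows. This is a sketch: cx and cy denote the iris center located by the SDK, and the pupil radius r_pupil is an assumed input, since the paper does not describe pupil segmentation; the unwrapping is a simplified Daugman-style model with concentric circular boundaries.

```python
import numpy as np

def iris_only(ocular, cx, cy, r=120):
    """Crop a square of side 2r centered on the iris (no eyelid masking)."""
    return ocular[cy - r:cy + r, cx - r:cx + r]

def iris_excluded(ocular, cx, cy, r=120):
    """Zero out the iris-only square, leaving a black square in the middle."""
    out = ocular.copy()
    out[cy - r:cy + r, cx - r:cx + r] = 0
    return out

def rubber_sheet(ocular, cx, cy, r_pupil, r_iris=120, n_rad=20, n_ang=240):
    """Unwrap the annular iris into an n_rad x n_ang rectangle by sampling
    radially between the pupillary and limbic boundaries."""
    out = np.empty((n_rad, n_ang))
    for i in range(n_rad):
        r = r_pupil + (r_iris - r_pupil) * (i + 0.5) / n_rad
        for j in range(n_ang):
            theta = 2 * np.pi * j / n_ang
            out[i, j] = ocular[int(round(cy + r * np.sin(theta))),
                               int(round(cx + r * np.cos(theta)))]
    return out
```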

In order to capture both local and global spatial information, each image was tessellated into 20×20 blocks. Due to the small size of the normalized iris-only image, it was tessellated into 10×10 blocks. The BSIF operator was applied to the entire image and a histogram of the BSIF responses was computed for each block. Each histogram value was divided by the sum of the histogram values for that block, thereby normalizing it. The normalized histograms were concatenated together to form a feature vector that was input to a Support Vector Machine with a linear kernel.6
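The classification stage can be sketched as follows. The paper does not name its SVM implementation; scikit-learn's SVC is used here purely for illustration, and the feature matrix and labels are hypothetical stand-ins for the concatenated BSIF histograms and the Male/Female labels.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-ins for the concatenated BSIF histogram features
# and the sex labels of the training images.
rng = np.random.default_rng(0)
X_train = rng.random((200, 64))
y_train = rng.integers(0, 2, size=200)     # 0 = Female, 1 = Male

clf = SVC(kernel='linear')                 # linear kernel, as in the paper
# For some (k, n) combinations footnote 6 reports a quadratic kernel instead,
# which would be SVC(kernel='poly', degree=2).
clf.fit(X_train, y_train)
pred = clf.predict(rng.random((10, 64)))   # sex predictions for test vectors
```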

A. Results

Each experiment was conducted using 60% of the subjects in the BioCOP dataset for training and 40% for testing. This subject-disjoint partitioning exercise was done 5 times. Further, the impact of the number of filters (the bit length, n) and the size of each filter (k) on prediction accuracy was studied. Figures 7, 8, 9 and 10 report the accuracies corresponding to the four regions considered in this work. Results for the left and right eyes are shown separately in each figure. In each graph, the average classification accuracy (over the 5 different trials) for different combinations of n and k is reported. While a change in filter size does not seem to have a drastic impact on sex prediction for all 4 regions, a change in bit length does have a somewhat discernible impact.

6 For some combinations of k and n, the quadratic kernel outperformed the linear kernel. This has been noted in the legend of the performance graphs.
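The subject-disjoint 60/40 partitioning described above can be sketched as follows (splitting at the subject level, so that no subject contributes images to both sets):

```python
import random

def subject_disjoint_split(image_subjects, train_frac=0.6, seed=0):
    """Partition image indices so that train_frac of *subjects* (not images)
    go to training and the rest to testing, with no subject in both sets."""
    rng = random.Random(seed)
    subjects = sorted(set(image_subjects))
    rng.shuffle(subjects)
    cut = int(train_frac * len(subjects))
    train_set = set(subjects[:cut])
    train_idx = [i for i, s in enumerate(image_subjects) if s in train_set]
    test_idx = [i for i, s in enumerate(image_subjects) if s not in train_set]
    return train_idx, test_idx
```

Repeating this with five different seeds yields the five trials averaged in the figures.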


[Plots: left and right ocular images; x-axis: BSIF filter size (3×3 to 17×17); y-axis: accuracy (0.6 to 0.9); one curve per bit size (5 to 12).]

Fig. 7: Ocular image: Sex prediction average accuracies for various BSIF bit lengths and filter sizes

[Plots: left and right normalized iris-only images; x-axis: BSIF filter size (3×3 to 17×17); y-axis: accuracy (0.6 to 0.9); one curve per bit size (5 to 12); bit sizes 5 to 7 use a quadratic SVM, bit sizes 8 to 12 a linear SVM.]

Fig. 8: Normalized iris-only image: Sex prediction average accuracies for various BSIF bit lengths and filter sizes

[Plots: left and right iris-only images; x-axis: BSIF filter size (3×3 to 17×17); y-axis: accuracy (0.6 to 0.9); one curve per bit size (5 to 12).]

Fig. 9: Iris-only image: Sex prediction average accuracies for various BSIF bit lengths and filter sizes

[Plots: left and right iris-excluded ocular images; x-axis: BSIF filter size (3×3 to 17×17); y-axis: accuracy (0.6 to 0.9); one curve per bit size (5 to 12).]

Fig. 10: Iris-excluded ocular image: Sex prediction average accuracies for various BSIF bit lengths and filter sizes


[Bar chart: Male, Overall, and Female accuracies for the Ocular, Iris-Excluded Ocular, Iris-Only, and Normalized Iris-Only regions; left eye, 10-bit BSIF with 9×9 filters; y-axis: accuracy (0.5 to 0.95).]

Fig. 11: Results of the sex prediction accuracy for a particular combination of k and n on each of the four regions considered in our experiments

In general, the smallest bit lengths (5-6) were outperformed by the slightly larger bit lengths (7-10). Increasing the bit length to 11 or 12, however, seemed to lower the accuracy for all 4 regions.

Figure 11 shows the male and female classification accuracies, along with the overall accuracy, for each of the four regions. The performance corresponds to the 10-bit BSIF operator with 9 × 9 filters. The ocular region and the iris-excluded ocular region exhibit the best performance, while the normalized iris-only image exhibits the worst performance, with almost a 20% difference in performance compared to the ocular region. Further, the male classification accuracies are observed to be higher than the female classification accuracies. This could be partly attributed to the larger number of male subjects than female subjects in the dataset.

Table 2 shows the prediction relationship between the left iris-excluded ocular region and the left iris-only region. The values in this table are based on the 10-bit BSIF operator with 9 × 9 filters. The table indicates that there is potential for fusing the outputs of these two regions, which could possibly result in a higher overall prediction accuracy.

VI. DISCUSSION

A number of different observations can be made based on the experiments conducted in this work.

• Both the iris and the surrounding ocular region manifest sex-specific attributes that can be captured using a texture descriptor.

• The ocular region surrounding the iris results in better sex classification accuracy than the iris-only region (normalized or non-normalized). We speculate that the morphology and structure of the ocular region provide sex-specific cues in the iris-excluded and extended ocular regions. For the iris-only region, the sex-specific cues could be due to the stromal texture with its crypts.

• The ocular region, which could be viewed as the spatial addition of the iris-excluded region to the iris-only region, provides only a modest increase in accuracy over the iris-excluded ocular region. This suggests (see also Table 2) that there could be a better way to combine the extracted features from the iris-only region with those of the iris-excluded region (as opposed to simply concatenating them into a single feature vector).

• The non-normalized NIR iris image results in better sex prediction performance than the normalized iris region, thereby suggesting that the normalization process may be filtering out some useful information.

• The proposed approach outperforms the previous best results reported by Lagree and Bowyer [10], which is known to have used a subject-disjoint train-test protocol.7 In [10], the best performance was 62%, while in this paper the best performance is 85.7%.

7 Bansal et al. [9] use a 10-fold cross-validation, so the number of test images in each trial is only 30. Given this extremely limited number of test images, we do not include their results in our comparison.


TABLE II: Percentage of test images correctly/incorrectly classified by the iris-excluded ocular region and correctly/incorrectly classified by the iris-only region (Left eye, BSIF parameters: bit length = 10, filter size = 9 × 9)

                                           Iris-Only
                                  Correct          Incorrect
Iris-Excluded Ocular   Correct    64.3 ± 2%        18.4 ± 0.7%
                       Incorrect   8.7 ± 1%         8.6 ± 0.4%

The sizes of the feature vectors generated from the ocular and iris-only images also vary substantially. For 5-bit BSIF, the ocular image results in a feature vector of length 13,376, while the iris-only image results in a feature vector that is approximately one third of that (4,608). This disparity in feature dimensions may also have impacted the sex classification accuracies of the two regions. Even though iris recognition algorithms use the normalized iris-only image, it appears that for sex prediction, the non-normalized iris region and the surrounding periocular region provide more cues.
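The stated dimensionalities are consistent with 20 × 20-pixel blocks: the 380 × 440 ocular image yields 19 × 22 = 418 blocks of 2^5 = 32 histogram bins, and the 240 × 240 iris-only image yields 12 × 12 = 144 blocks. A quick arithmetic check:

```python
def feature_length(h, w, block, n_bits):
    """Number of whole block x block tiles times 2**n_bits histogram bins."""
    return (h // block) * (w // block) * 2 ** n_bits

print(feature_length(380, 440, 20, 5))   # ocular image -> 13376
print(feature_length(240, 240, 20, 5))   # iris-only image -> 4608
```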

VII. FUTURE WORK

In the current work, the same feature descriptor (viz., BSIF) was used to encode both the iris and the periocular regions. However, it is necessary to investigate if these regions have to be encoded using different feature descriptors prior to combining them for the sex prediction task. It is also necessary to determine if the race or age of the subject impacts the accuracy of sex prediction.

We would like to study the possibility of combining the results of multiple patch sizes (k) and multiple filters (n) of the BSIF approach in order to further improve the sex classification accuracy. We would also like to combine the results of the left and right ocular regions for improving performance. However, it is likely that there is an upper bound on accuracy, dictated by nature itself, that would preempt the possibility of obtaining perfect classification.

The BSIF feature vector used in this work could also be applied to the problem of race and age classification from the ocular image. Our preliminary investigation in this regard is promising. We also plan on developing a more holistic approach that predicts sex and race (and possibly age group) simultaneously.

VIII. ACKNOWLEDGEMENT

This work was partially funded by NSF-CITeR.

REFERENCES

[1] J. S. Doyle, K. W. Bowyer, and P. J. Flynn, "Variation in accuracy of textured contact lens detection based on sensor and lens pattern," in Proc. of IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS), 2013, pp. 1–7.

[2] A. K. Jain, A. Ross, and K. Nandakumar, Introduction to Biometrics. New York: Springer, 2011.

[3] A. Dantcheva, P. Elia, and A. Ross, "What else does your biometric data reveal? A survey on soft biometrics," IEEE Transactions on Information Forensics and Security (TIFS), vol. 11, 2016, pp. 441–467.

[4] A. K. Jain, S. C. Dass, and K. Nandakumar, "Can soft biometric traits assist user recognition?" in Defense and Security. International Society for Optics and Photonics, April 2004, pp. 561–572.

[5] J. Merkow, B. Jou, and M. Savvides, "An exploration of gender identification using only the periocular region," in Proc. of IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS), 2010, pp. 1–5.

[6] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.

[7] V. Thomas, N. V. Chawla, K. W. Bowyer, and P. J. Flynn, "Learning to predict gender from iris images," in Proc. of IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS), 2007, pp. 1–5.

[8] J. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recognition, vol. 36, no. 2, pp. 279–291, 2003.

[9] A. Bansal, R. Agarwal, and R. Sharma, "SVM based gender classification using iris images," in Proc. of International Conference on Computational Intelligence and Communication Networks (CICN), Nov 2012, pp. 425–429.

[10] S. Lagree and K. W. Bowyer, "Predicting ethnicity and gender from iris texture," in IEEE International Conference on Technologies for Homeland Security (HST), 2011, pp. 440–445.

[11] K. Laws, "Textured image segmentation," Ph.D. dissertation, University of Southern California, January 1980.

[12] R. A. Sturm and M. Larsson, "Genetics of human iris colour and patterns," Pigment Cell & Melanoma Research, vol. 22, no. 5, pp. 544–562, 2009.

[13] M. Larsson, N. L. Pedersen, and H. Stattin, "Importance of genetic effects for characteristics of the human iris," Twin Research, vol. 6, no. 3, pp. 192–200, 2003.

[14] M. Larsson and N. L. Pedersen, "Genetic correlations among texture characteristics in the human iris," Molecular Vision, vol. 10, pp. 821–831, 2004.

[15] J. Kannala and E. Rahtu, "BSIF: Binarized statistical image features," in Proc. of International Conference on Pattern Recognition (ICPR), 2012, pp. 1363–1366.

[16] A. Hyvarinen, J. Hurri, and P. O. Hoyer, Natural Image Statistics: A Probabilistic Approach to Early Computational Vision. Springer Science & Business Media, 2009, vol. 39.

[17] J. E. Tapia, C. A. Perez, and K. W. Bowyer, "Gender classification from iris images using fusion of uniform local binary patterns," in Proc. of ECCV Workshops. Springer, 2014, pp. 751–763.
