
Research Article: Iris Recognition Using Image Moments and k-Means Algorithm

Yaser Daanial Khan,1,2 Sher Afzal Khan,2 Farooq Ahmad,3 and Saeed Islam4

1 School of Science and Technology, University of Management and Technology, Lahore 54000, Pakistan
2 Department of Computer Science, Abdul Wali Khan University, Mardan 23200, Pakistan
3 Faculty of Information Technology, University of Central Punjab, 1-Khayaban-e-Jinnah Road, Johar Town, Lahore 54000, Pakistan
4 Department of Mathematics, Abdul Wali Khan University, Mardan 23200, Pakistan

Correspondence should be addressed to Yaser Daanial Khan; ydk@ucp.edu.pk

Received 11 January 2014; Accepted 19 February 2014; Published 1 April 2014

Academic Editors: Y.-B. Yuan and S. Zhao

Copyright © 2014 Yaser Daanial Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk-shaped area of the iris is transformed into a rectangular form. The described moments are extracted from the grayscale image, which yields a feature vector containing scale-, rotation-, and translation-invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assumed to belong to the cluster whose centroid is nearest to its feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%.

1. Introduction

Identification of individuals has been an important need over the ages. Conventionally, identification documents such as an identity card, passport, or driving license have been utilized for the purpose. Such identification methods have been evaded several times through the use of forged documents. In the digital world a login and a password or a PIN code is used for identification. Besides shoulder surfing and sniffing, several other techniques have evolved which are used to crack such codes and breach security. Undoubtedly, a robust identification technique is essential for a safe and well-supervised environment. This situation gives rise to the need for an identification technique based on some inimitable biological features of a person. Numerous biological human features are peculiar and unique, such as fingerprints, suture patterns, iris patterns, gait, and ear shapes. The patterns found in these structures are unique for every human; hence they can be used as an identification tool. In the recent past, the use of a person's iris image for identification has gained popularity. The radial and longitudinal muscles in the iris of an eye are responsible for the constriction and dilation of the pupil. The pupil changes its size depending upon the light intensity the eye is exposed to. The muscles of the iris form its texture, while the presence or absence of a pigment determines its color. The color of the iris is genetically dependent, whereas the texture is not. The texture of the iris forms random patterns unique to each human. A close observation of an iris may reveal pustules, rings, stripes, and undulations forming a unique pattern.

In the recent past, researchers have developed several mechanisms for matching the pattern that lies within the iris. In [1] the author employs a bank of Gabor filters to form a fixed-length vector from the local and global iris characteristics. Iris matching is established based on the weighted Euclidean distance between the two iris images being compared. In another article, by Monro et al., a technique is devised using the discrete cosine transform. Iris coding is based on the differences of discrete cosine transform coefficients of overlapped angular patches from normalized iris images [2]. Certain researchers have employed various statistical models for the purpose. A nonparametric statistical model, namely, neural networks (NN), is used for pattern matching and data compression in [3–5]. An image processing technique using specially designed kernels for iris recognition is used to capture local characteristics so as to

Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 723595, 9 pages. http://dx.doi.org/10.1155/2014/723595


produce discriminating texture features in [6]. Several sorts of transformations also prove helpful in extracting useful features from an iris image. This feature vector is further used to form a classification approach for identifying a person based on his iris image [7, 8]. In the groundbreaking work by Daugman, the iris recognition principle is based on the failure of statistical independence tests on iris phase structure encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons generates discriminating entropy, enabling the most probable decision about a person's identity [9].

Most of the techniques based on feature extraction are designed for images of a certain fixed resolution. They fail to provide the desired result for the same images at a different resolution; that is, such models are not scale invariant. Techniques making use of NN incorporate a time-taking training procedure. At times this training process may prove to be tricky, rendering the model unable to yield quick results. On the other hand, some techniques that make use of certain filters may produce undesired results if the image is rotated, which implies that such models are not rotation invariant. In this paper a scale- and rotation-invariant technique for the same purpose is described. The proposed technique requires little training, after which results are produced instantly. It is based on the use of image moments. Moments are properties that describe the characteristics of a certain distribution of data. Image moments (namely, Hu moments) are a quantitative measure of the shape of the distribution formed by data collected as image pixel intensities and their locations [10].

In the proposed work the iris is segmented from an eye image. Image moments are computed from the segmented grayscale image. Classification of an iris is performed by the k-means algorithm. The composition of the paper is as follows. Section 2 gives an overview of the iris recognition process. Section 3 explains the method used for iris segmentation. Section 4 gives a method for transforming the radial information of the iris into a rectangular form. Section 5 explains how this method can be further optimized. Image moments and the method of computation of moments are described in Section 6. Section 7 describes the adoption of the k-means algorithm for clustering and classification using the moments information. Some of the results are discussed in Section 8, and Section 9 presents some conclusions.

2. Iris Recognition

Initially the image of an eye is acquired by a device called an iriscope, specifically designed for eye image acquisition at a high resolution. A large database of such images is collected, having several classes. The iris within each image is segmented using an accurate and sufficiently fast technique. The iris image is of a radial nature rather than rectangular, which makes it unsuitable to be processed by any mathematical or statistical model of a linear nature. There are two approaches to resolve this problem. The first is to adopt a model capable of processing data in its inherent radial form. The other requires transformation of the radial data into a multidimensional linear form such that the information pertaining to the iris texture is retained. In this piece of work the latter approach is adopted.

The information within the texture of the rectangular image may be used to form a probability density function. The image moments quantify the characteristics of this distribution. Using these raw moments, translation-, scale-, and rotation-invariant moments are computed. Accumulated, these moments describe the characteristics of the pattern of the iris. This forms a feature vector which is later used for classification of iris images.

3. Iris Segmentation

Each image in the database contains the iris pattern, which is of interest; the rest of the image is of no use and therefore is not processed. The iris is extracted from the image using the segmentation process described in [5]. The iris is modeled as a disk-like structure consisting of two concentric circles (see Figure 2). The noise in the eye image is suppressed using numerous iterations of median filtering [11]. The image with reduced noise is filtered to extract edges using an edge detection algorithm such as the Canny [12] or the Sobel [13] filter, as shown in Figure 1(a). Now, using the resultant image, the iris outline is extracted. The image is scanned top to bottom, left to right, line by line. Each point on the outer and the inner edge is stored in two separate arrays. These points are further used to determine the center and the radii of the concentric circles forming the iris and the pupil, as shown in Figure 1(b). Assuming that the outline of the iris is a circle, a point (x, y) on the circle with center at (-g, -f) satisfies the equation

    x^2 + y^2 + 2gx + 2fy + c = 0,    (1)

and the radius of the circle is given as

    r = sqrt(g^2 + f^2 - c).    (2)

Choosing any three arbitrary points from the array containing points of the circle, a system of simultaneous equations is formed. The solutions for g, f, and c in terms of the three selected points are derived from the system and are given as

    g = [(x_1^2 - x_3^2 + y_1^2 - y_3^2)(y_2 - y_1) - (x_1^2 - x_2^2 + y_1^2 - y_2^2)(y_3 - y_1)]
        / (2[(x_3 - x_1)(y_2 - y_1) + (x_1 - x_2)(y_3 - y_1)]),

    f = [x_1^2 - x_2^2 + y_1^2 - y_2^2 + 2g(x_1 - x_2)] / [2(y_2 - y_1)],    (3)

where (x_1, y_1), (x_2, y_2), and (x_3, y_3) are the three arbitrary points. Putting in the values of f and g, the value of c is determined from the following equation:

    c = -x_1^2 - y_1^2 - 2g x_1 - 2f y_1.    (4)
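As a concrete check of (1)-(4), the sketch below solves for g, f, and c from three boundary points and recovers the centre (-g, -f) and the radius per (2). The function name and the point format are illustrative, not from the paper, and the chosen triplet is assumed to have y_2 ≠ y_1 so that (3) is well defined.

```python
import math

def circle_from_points(p1, p2, p3):
    """Solve x^2 + y^2 + 2gx + 2fy + c = 0 (Eq. (1)) for g, f, c using
    three boundary points (Eq. (3)-(4)), then return the centre (-g, -f)
    and the radius r = sqrt(g^2 + f^2 - c) (Eq. (2))."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = 2 * ((x3 - x1) * (y2 - y1) + (x1 - x2) * (y3 - y1))
    g = ((x1**2 - x3**2 + y1**2 - y3**2) * (y2 - y1)
         - (x1**2 - x2**2 + y1**2 - y2**2) * (y3 - y1)) / denom
    f = (x1**2 - x2**2 + y1**2 - y2**2 + 2 * g * (x1 - x2)) / (2 * (y2 - y1))
    c = -x1**2 - y1**2 - 2 * g * x1 - 2 * f * y1
    return (-g, -f), math.sqrt(g**2 + f**2 - c)
```

Feeding it three points of a known circle, for example (7, 3), (2, 8), and (-3, 3) from the circle of centre (2, 3) and radius 5, recovers that centre and radius.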


Figure 1: The figure depicts the iris image after edge detection, making the disk-shaped edges apparent. (a) An iris image after edge detection. (b) The disk-shaped characteristics of the iris; note the circles overlapping the inner and outer circular edges.

Figure 2: Transforming the radial iris into rectangular form (inner radius r, outer radius R, angle Θ).

Moreover, the radius r is determined using (2). The center and the radii of both concentric circles are determined in the described manner. The information within the inner circle is left out, as it encompasses the pupil, while the information bound between the inner and outer circles contains the significant and unique iris pattern. Several triplets of circle points are used to compute the center of each circle. The best estimate of the center is achieved by discarding extreme center points and then taking the mean of the rest. For each center point (x_i, y_i) of the inner circle and each center point (u_i, v_i) of the outer circle, the mean (x_c, y_c) is computed as

    x_c = (Σ_{i=1}^{n} x_i + Σ_{i=1}^{n} u_i) / 2n,

    y_c = (Σ_{i=1}^{n} y_i + Σ_{i=1}^{n} v_i) / 2n.    (5)

Similarly, the averages for the radius of the inner circle, r_m, and the radius of the outer circle, R_m, are computed as

    r_m = (Σ_{i=1}^{n} r_i) / n,

    R_m = (Σ_{i=1}^{n} R_i) / n.    (6)
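The averaging of (5) and (6) can be sketched as below. The paper says extreme estimates are discarded before taking the mean but does not say how many; the fixed `trim` count used here is an assumption, as are the function names.

```python
def trimmed_mean(values, trim=1):
    """Mean after dropping the `trim` smallest and largest values,
    approximating the paper's 'discard extreme center points' step."""
    s = sorted(values)
    if len(s) > 2 * trim:
        s = s[trim:len(s) - trim]
    return sum(s) / len(s)

def estimate_center_and_radii(inner, outer, inner_r, outer_r, trim=1):
    """(x_c, y_c) per Eq. (5), pooling the inner- and outer-circle centre
    estimates, plus the mean radii r_m and R_m per Eq. (6)."""
    xs = [p[0] for p in inner] + [p[0] for p in outer]
    ys = [p[1] for p in inner] + [p[1] for p in outer]
    center = (trimmed_mean(xs, trim), trimmed_mean(ys, trim))
    return center, trimmed_mean(inner_r, trim), trimmed_mean(outer_r, trim)
```

With one wild outlier per list, the trimming keeps the estimate at the true centre instead of dragging it toward the outlier.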

The pattern formed by the radial and longitudinal muscles of the iris is of interest. This pattern is further extracted to form a rectangular image. A computationally moderate solution to the problem must provide a fast transformation method. One such method is discussed in [5]; it transforms the radial image into rectangular form and further makes use of the midpoint algorithm to optimize it, as described in the next section.

4. Radial to Linear Transformation

An arbitrary Cartesian point (x', y') anywhere on the disk-like structure, having parametric coordinates (r, θ), is given as

    x' = r · cos(θ),

    y' = r · sin(θ).    (7)

Cartesian points along the line starting at parametric point (r_m, θ) and ending at (R_m, θ) are discretized at appropriate intervals and are placed in a column of a two-dimensional array. A number of columns are collected, starting from θ = 0° and incrementing in small steps up to θ = 360°. The collection of these columns forms a rectangular canvas containing the required iris pattern (see Figure 3).

The computations required to compute each point within the disk-shaped structure are reduced by exploiting the symmetric properties of a circle. A circular shape exhibits eight-way symmetry [14]. This means that for any computed point (a, b), seven more points that lie on the same circle are determined using its symmetric properties. These seven points are (b, a), (-a, b), (-b, a), (a, -b), (b, -a), (-a, -b), and (-b, -a), given that the center lies at the origin. In case the center lies at an arbitrary point, these points are translated accordingly. Use of this symmetric property reduces the computations eightfold. Each point on the line is determined by incrementing the x-coordinate in discrete steps and calculating the corresponding value of the y-coordinate using the line equation.

The x-coordinate and the y-coordinate of the pixels along a single line making an arbitrary angle θ can be determined


Figure 3: The figure illustrates iris images (a) before and (b) after the radial-to-rectangular transformation.

incrementally, while the starting pixel coordinates are (x', y') and the coordinates of the endpoint pixel are (x, y). Based upon the value of the x-coordinate of the previous pixel, the value of the x-coordinate for the next pixel is calculated by incrementing the previous value of x. This value of the x-coordinate, say x_m, is put into the following equation:

    y_m = [(y - y') / (x - x')] (x_m - x') + y',    (8)

which yields the corresponding y-coordinate.
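A direct, unoptimized sketch of the unwrapping in (7) is given below; the paper's own version replaces the per-pixel trigonometry with the symmetry and midpoint optimizations of Section 5. Nearest-neighbour sampling, the `img[y][x]` indexing, and the row/column counts are assumptions of this sketch.

```python
import math

def unwrap_iris(img, center, r_in, r_out, n_theta=360, n_r=64):
    """Sample the annulus between radii r_in and r_out along radial lines
    (Eq. (7)) into an n_r x n_theta rectangular canvas. Each column holds
    one radial line; `img` is a 2D list indexed as img[y][x]."""
    cx, cy = center
    canvas = [[0] * n_theta for _ in range(n_r)]
    for j in range(n_theta):
        theta = 2 * math.pi * j / n_theta
        for i in range(n_r):
            r = r_in + (r_out - r_in) * i / (n_r - 1)
            x = int(round(cx + r * math.cos(theta)))  # Eq. (7), x'
            y = int(round(cy + r * math.sin(theta)))  # Eq. (7), y'
            canvas[i][j] = img[y][x]
    return canvas
```

On a synthetic image whose intensity equals the x-coordinate, the θ = 0 column reads off pixels at x = cx + r, which makes the mapping easy to verify.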

5. Optimizing the Algorithm

The determination of points along a line is further optimized by use of the midpoint method [14]. The computations required to yield a point along a line are reduced to the mere addition of a small incremental value. The gradient m is computed for this purpose as given in the following equation:

    m = (y - y') / (x - x').    (9)

Let

    dy = y - y',

    dx = x - x',    (10)

where the line endpoints are (x', y') and (x, y). In accordance with the midpoint method for a straight line with gradient between 0 and 1, the increments are ΔE = 2dy and ΔNE = 2(dy - dx). Initially the control variable is d = 2dy - dx. If the value of d is negative, the east pixel is chosen and d is updated by adding ΔE; otherwise the north-east pixel is chosen and d is updated by adding ΔNE [5, 14].
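The incremental decision described above can be sketched as follows for a line with gradient between 0 and 1; the function name and the returned point list are illustrative choices, not from the paper.

```python
def midpoint_line(x0, y0, x1, y1):
    """Midpoint line rasterisation for 0 <= gradient <= 1: each step adds
    dE or dNE to the decision variable d instead of evaluating Eq. (8)."""
    dy, dx = y1 - y0, x1 - x0
    d = 2 * dy - dx                 # initial control variable
    dE, dNE = 2 * dy, 2 * (dy - dx)
    x, y = x0, y0
    pts = [(x, y)]
    while x < x1:
        if d <= 0:
            d += dE                 # east step: y unchanged
        else:
            d += dNE                # north-east step: y advances
            y += 1
        x += 1
        pts.append((x, y))
    return pts
```

For the segment from (0, 0) to (8, 4) the loop visits one pixel per column and passes through the true midpoint (4, 2), using only integer additions.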

6. Pattern Recognition Using Image Moments

A perceptive action performed on intricate structures needs to quantify their attributes. The state of any structure is quantifiable into data. Diversification of this data represents interaction or changes in the state. All such quantification methods generate finite data. Data by itself is insignificant, but the information embedded within the data is useful. Information is extracted either directly from the data itself or from the patterns formed by the arrangement of data. Researchers have devised various models for extracting information from data embedded in an image. Applications based on such models do not add to the contents of the data; rather, they find hidden data patterns in order to extract interesting and useful information. A probability density can be formed for any dataset. The parameters of the probability density function inform us about the general manner in which the data is distributed. Moments are characteristics of the probability density function which are based on its kurtosis and skewness. Image moments describe the properties of a distribution formed using the pixel data of the image along its axes. The moments are typically chosen to depict a certain interesting property of the image. Such moments prove beneficial in extracting and summarizing the properties of the image in order to produce useful results. Properties of an image such as centroid, area, and orientation are quantified by this process. Another dividend of image moments is that they bring together the local and global geometric details of a grayscale image [15].

6.1. Extracting Moments from an Image. An image in the real world is modeled using a Cartesian distribution function f(x, y) in its analog form. This function is used to provide moments of order (p + q) over the image plane P and is generalized as

    M_pq = ∬_P ψ_pq(x, y) · f(x, y) dx dy,    p, q = 0, 1, 2, ..., ∞,    (11)

where ψ_pq is the basis function and P is the image plane. Equation (11) yields a weighted average over the plane P.


The basis function is designed such that it represents some invariant features of the image. Furthermore, the properties of the basis function are passed on to the moments. An image is of a discrete nature; thus it is divided into pixels, each having a discrete intensity level. Equation (11) is adapted for the digital image as

    M_pq = Σ_x Σ_y ψ_pq(x, y) · I(x, y),    p, q = 0, 1, 2, ..., ∞,    (12)

where I(x, y) is the intensity of the pixel in the digital image at the x-th row and y-th column.

In [10, 16] the authors prove that the two-dimensional continuous (p + q)-th order moments are defined using the integral

    M_pq = ∬_{-∞}^{∞} x^p y^q I(x, y) dx dy,    (13)

where the image function lies within some finite region of the xy plane. In the case of a digital image the integrals are replaced by summations, which is formulated as

    M_pq = Σ_{x=1}^{K} Σ_{y=1}^{L} x^p y^q I(x, y),    (14)

where x^p y^q is the basis function, K and L are the dimensions of the image, and M_pq is the Cartesian moment for the two-dimensional image. Note that this basis function is highly correlated, that is, nonorthogonal. The moment M_00 represents the total intensity of the image, whereas the first-order moments are used to find the center of mass, or centroid, of the image, given as

    x̄ = M_10 / M_00,    ȳ = M_01 / M_00,    (15)

where (x̄, ȳ) is the centroid.
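Equations (14) and (15) can be sketched directly. One assumption of this sketch: pixel indices run from zero, as is usual in code, rather than from one as written in (14); this only shifts the coordinate frame.

```python
def raw_moment(img, p, q):
    """M_pq = sum_x sum_y x^p y^q I(x, y) (Eq. (14)); img[y][x] holds the
    pixel intensity, with zero-based indices rather than Eq. (14)'s 1-based."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    """Centre of mass via the first-order moments (Eq. (15))."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

A single bright pixel makes the behaviour obvious: M_00 is its intensity and the centroid sits exactly on it.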

6.2. Centralized Moments. Once the centroid is determined, it is used to compute the centralized moments. In [15] the central moments for two-dimensional data are given as

    μ_pq = Σ_x Σ_y (x - x̄)^p (y - ȳ)^q I(x, y),    (16)

where μ_pq are the central moments. Note that these are similar to Cartesian moments translated to the centroid. This depicts the translation-invariant property of the centralized moments, which are always akin to the centroid of the segmented object.

Further simplification of (16) up to order 3 generates the following moments:

    μ_00 = M_00,
    μ_01 = 0,
    μ_10 = 0,
    μ_11 = M_11 - x̄ M_01 = M_11 - ȳ M_10,
    μ_20 = M_20 - x̄ M_10,
    μ_02 = M_02 - ȳ M_01,
    μ_21 = M_21 - 2x̄ M_11 - ȳ M_20 + 2x̄^2 M_01,
    μ_12 = M_12 - 2ȳ M_11 - x̄ M_02 + 2ȳ^2 M_10,
    μ_30 = M_30 - 3x̄ M_20 + 2x̄^2 M_10,
    μ_03 = M_03 - 3ȳ M_02 + 2ȳ^2 M_01.    (17)
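A minimal sketch of (16), computing all μ_pq up to order 3 by direct summation; the dictionary layout is an assumption of this sketch. Since it evaluates (16) directly, it can also serve as a numerical cross-check of the expansions in (17).

```python
def central_moments(img, order=3):
    """mu_pq (Eq. (16)) for all p + q <= order, about the centroid of
    Eq. (15); img[y][x] holds the pixel intensity."""
    m = lambda p, q: sum((x ** p) * (y ** q) * v
                         for y, row in enumerate(img)
                         for x, v in enumerate(row))
    xb, yb = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)   # centroid (Eq. (15))
    mu = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            mu[(p, q)] = sum(((x - xb) ** p) * ((y - yb) ** q) * v
                             for y, row in enumerate(img)
                             for x, v in enumerate(row))
    return mu
```

On any image, μ_10 and μ_01 vanish (the defining property of central moments), μ_00 equals the total intensity, and μ_20 agrees with M_20 - x̄ M_10 from (17).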

6.3. Scale Invariant Moments. These moments are further made scale invariant, as explained in [4, 5], and are given as

    η_pq = μ_pq / μ_00^γ,    (18)

where η_pq are the scale-normalized central moments and γ = (p + q)/2 + 1, with (p + q) ≥ 2.
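The normalization in (18) is a one-liner over the central-moment dictionary of the previous section; the dictionary format is an assumption carried over from that sketch.

```python
def scale_invariant(mu):
    """eta_pq = mu_pq / mu_00^gamma with gamma = (p + q)/2 + 1 (Eq. (18)),
    applied only to orders p + q >= 2 as the definition requires."""
    return {(p, q): v / (mu[(0, 0)] ** ((p + q) / 2 + 1))
            for (p, q), v in mu.items() if p + q >= 2}
```

For instance, with μ_00 = 4 and μ_20 = 8, γ = 2 and η_20 = 8 / 4^2 = 0.5.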

6.4. Image Orientation. The second-order central moments contain information about the orientation of the image. Using these moments, a covariance matrix is derived. Let

    μ'_20 = μ_20 / μ_00 = M_20 / M_00 - x̄^2,

    μ'_02 = μ_02 / μ_00 = M_02 / M_00 - ȳ^2,

    μ'_11 = μ_11 / μ_00 = M_11 / M_00 - x̄ȳ,    (19)

and then the covariance matrix is given as

    cov[I(x, y)] = [ μ'_20  μ'_11 ]
                   [ μ'_11  μ'_02 ].    (20)

The major and minor axes of the image intensity correlate with the eigenvectors of the given covariance matrix. The orientation of the image is described by the eigenvector associated with the highest eigenvalue. In [4] it is shown that the angle Θ is computed by the following equation:

    Θ = (1/2) tan^{-1} (2μ'_11 / (μ'_20 - μ'_02)),    (21)

which applies provided that

    μ'_20 - μ'_02 ≠ 0.    (22)


Using (20), the eigenvalues of the covariance matrix are easily obtained and are given as

    λ_i = (μ'_20 + μ'_02)/2 ± sqrt(4μ'_11^2 + (μ'_20 - μ'_02)^2)/2.    (23)

Notice that these values are proportional to the square of the length of the eigenvector axes. The difference between the eigenvalues marks yet another important characteristic: it shows how elongated the image is. This property is termed eccentricity and is computed as

    sqrt(1 - λ_2/λ_1).    (24)
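Equations (19)-(24) combine into a short routine. One deviation from the text is an assumption of this sketch: `atan2` is used in place of the plain arctangent of (21) so that the μ'_20 = μ'_02 case of (22) is handled without a division.

```python
import math

def orientation_and_eccentricity(mu):
    """Theta (Eq. (21)) and eccentricity (Eq. (24)) from the second-order
    central moments mu = {(p, q): mu_pq}; mu'_pq as in Eq. (19)."""
    u20 = mu[(2, 0)] / mu[(0, 0)]
    u02 = mu[(0, 2)] / mu[(0, 0)]
    u11 = mu[(1, 1)] / mu[(0, 0)]
    theta = 0.5 * math.atan2(2 * u11, u20 - u02)          # Eq. (21)
    root = math.sqrt(4 * u11**2 + (u20 - u02)**2) / 2
    l1 = (u20 + u02) / 2 + root                           # Eq. (23), +
    l2 = (u20 + u02) / 2 - root                           # Eq. (23), -
    ecc = math.sqrt(1 - l2 / l1) if l1 else 0.0           # Eq. (24)
    return theta, ecc
```

A blob elongated along the x-axis (μ'_20 much larger than μ'_02, μ'_11 = 0) gives Θ = 0 and an eccentricity close to 1.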

6.5. Rotation Invariant Moments. Previously we have discussed translation- and scale-invariant moments. In [16] rotation-invariant moments are derived, which are usually termed the Hu set of invariant moments. These are given as follows:

    I_1 = η_20 + η_02,

    I_2 = (η_20 - η_02)^2 + (2η_11)^2,

    I_3 = (η_30 - 3η_12)^2 + (3η_21 - η_03)^2,

    I_4 = (η_30 + η_12)^2 + (η_21 + η_03)^2,

    I_5 = (η_30 - 3η_12)(η_30 + η_12)[(η_30 + η_12)^2 - 3(η_21 + η_03)^2]
        + (3η_21 - η_03)(η_21 + η_03)[3(η_30 + η_12)^2 - (η_21 + η_03)^2],

    I_6 = (η_20 - η_02)[(η_30 + η_12)^2 - (η_21 + η_03)^2] + 4η_11(η_30 + η_12)(η_21 + η_03),

    I_7 = (3η_21 - η_03)(η_30 + η_12)[(η_30 + η_12)^2 - 3(η_21 + η_03)^2]
        - (η_30 - 3η_12)(η_21 + η_03)[3(η_30 + η_12)^2 - (η_21 + η_03)^2],

    I_8 = η_11[(η_30 + η_12)^2 - (η_03 + η_21)^2] - (η_20 - η_02)(η_30 + η_12)(η_03 + η_21).    (25)

Each of the rotation-invariant moments extracts a characteristic attribute of the image. For example, I_1 represents the moment of inertia about the centroid, while I_7 extracts skew-invariant properties, which are useful in differentiating between images that are mirror reflections of each other [10, 15-17].

7. Clustering for Classification

By now the iris image has been segmented and transformed into a rectangular canvas. All the described moments are applied and a feature vector, namely I, is extracted. This vector contains translation-, scale-, and rotation-invariant as well as orientation-related moments. The vector corresponds to various features of the image; hence it is used for classification. An unsupervised approach is adopted for classification, using the k-means clustering algorithm. Given a set of multidimensional observations (x_1, x_2, ..., x_n), the k-means algorithm partitions the observations into K sets, K ≤ n, generating the set S = {S_1, S_2, ..., S_K} so as to minimize the following objective function:

    arg min_S Σ_{i=1}^{K} Σ_{x_j ∈ S_i} ‖x_j - μ_i‖^2,    (26)

where μ_i is the mean of all the observations in S_i. Initially, the extracted moment vectors for an iris image sample are considered to be the initial means.

The k-means algorithm has two major steps, namely the assignment step and the update step. The means are used to assign each observation to a cluster in the assignment step. An observation is assigned to the cluster whose mean makes the closest match. Formally, this step generates the set S_i such that

    S_i = {x_r : ‖x_r - m_i‖ ≤ ‖x_r - m_j‖ ∀ 1 ≤ j ≤ K}.    (27)

Also, an observation x_r should be associated with exactly one S_i, even if two or more distances are found comparable. The next step is based on the clusters identified for the observations in the previous step. The mean of each cluster is recalculated as the centroid of its observations, as given in the following equation:

    m_i = (1/|S_i|) Σ_{x_j ∈ S_i} x_j.    (28)

Both steps are iterated and the centroids are readjusted. This process continues until there is no appreciable change in the means. At this stage the means have converged and no further training is required [18, 19].
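The assignment step (27) and update step (28) can be sketched as a minimal Lloyd-style loop in pure Python. Seeding the means from the first k samples is an assumption of this sketch; the paper seeds them from per-class iris sample vectors.

```python
def kmeans(points, k, iters=100):
    """Iterate the assignment step (Eq. (27)) and update step (Eq. (28))
    until the means stop changing. `points` is a list of equal-length
    tuples; returns the converged means and the final clusters."""
    means = points[:k]                     # assumed initialisation
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                   # Eq. (27): nearest mean wins
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, means[j])))
            clusters[i].append(p)
        new = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else means[i]
               for i, cl in enumerate(clusters)]   # Eq. (28): recompute centroids
        if new == means:                   # converged: no appreciable change
            break
        means = new
    return means, clusters
```

On two well-separated pairs of points the loop converges in a few iterations to the two pair midpoints.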

8. Results

The CASIA database, containing thousands of images belonging to hundreds of different people, is used to gather test results. Nearly one-fourth of the iris images from each class are retained as test cases, while the rest are used for training. The distorted images within the database are rejected. The iris portion of each image is marked out using the segmentation algorithm and is later transformed into a rectangular canvas. Further, the grayscale rectangular canvas of the iris is used


Figure 4: Panels (a) and (b) show different clusters. Notice that all the clusters are linearly separable and can be distinguished by their Euclidean distance from the centroid.

to compute the image moment vector. This vector contains information which is translation-, scale-, and rotation-invariant and provides orientation information. Using the k-means algorithm, each image is assigned to a cluster. The k-means algorithm is iterated until convergence is achieved and the centroid of each cluster is determined. Once the system is fully trained, it is ready to accept an arbitrary input and provide a match. The model responds with the correlation of an arbitrary image's moment vector to a cluster if the image belongs to a known class. Figure 4 depicts the various clusters formed using the k-means algorithm; it also shows how the class of a sample is distinguished based upon the Euclidean distance of its feature vector from the centroid of an arbitrary cluster. Moreover, Figure 5 shows a confusion matrix depicting the accuracy of the model. The confusion matrix shows that the accuracy of the model for certain arbitrary classes is 99.0%, while the overall accuracy of the model for all the images in the database is estimated to be 98.5%. Moreover, it also reports the level of confidence of a match based on the Euclidean distance of the sample from the centroid of the identified cluster. Level 0 is the highest, which means that the Euclidean distance of the sample from the centroid of the cluster is low, and level 4 is the lowest, which indicates that the Euclidean distance of the sample from the centroid of any cluster does not lie within a stipulated threshold to confidently indicate a match.

Furthermore, a number of experiments were carried out to determine the accuracy and efficiency of the proposed model in comparison with other competitive models. In [20] the authors present a technique which extracts the features of the iris using fine-to-coarse approximation at different resolution levels, determined through the discrete dyadic wavelet transform zero crossing representation. The resultant one-dimensional feature vector is used to find a match by computing various distances with arbitrary feature vectors. Ma et al. present yet another iris recognition technique using Gabor filters [1, 6]. The authors use a bank of Gabor filters to extract a fixed length feature vector signifying the global and local characteristics of the iris image. A match is established by computing the weighted Euclidean distance between the feature vectors of arbitrary iris images. Daugman in [9] relies on the morphogenetic randomness of the texture in the trabecular meshwork of the iris. A failure of the statistical independence test on two coded patterns from the same iris indicates a match. This method extracts the visible texture of the iris from a real time video image. The image is later encoded into a compact sequence of multiscale quadrature 2D Gabor wavelet coefficients. The most significant 256 bytes form the iris code. An exclusive OR operation is performed to generate a decision.

All the above-mentioned techniques, including the proposed technique, are executed in order to obtain results. The genuine acceptance rate (GAR) and false acceptance rate (FAR) are observed for each technique. A receiver operating characteristics (ROC) distribution is plotted for each technique based on the results, as shown in Figure 6. The ROC distribution comparatively highlights the accuracy, along with the frequency of occurrence of errors, of the proposed and other current state-of-the-art models. The following section briefly provides some discussion about the proposed system along with an interpretation of the ROC distribution formed.

9 Conclusion

Through analysis of the data obtained after moments extraction, a number of conclusions are inferred. Images of a certain iris differing in orientation yielded varying eigenvalues and eccentricity. However, a change in orientation of an image barely affects the values of the rotation invariant moments, while the raw and scale invariant moments are affected. A change in orientation of an image affects the Euclidean distance of the moment vectors from the centroid. Despite this, there still remains a great probability of the image being classified

Figure 5: The figure shows the confusion matrix for some arbitrary classes; the accuracy of the model for these classes is 99.0%.

Figure 6: The figure illustrates the receiver operating characteristics distributions for different competitive techniques, including the proposed one (curves shown: Avila, Li Ma, Proposed, and Daugman).

correctly because of the coherence of the scale invariant moments. Although the model exhibits scale and rotation invariant attributes, some impairment is caused by the luminosity of the image. Two arbitrary images of the same object yield comparable moments if the luminosity is the same, but they may yield differing moments if the luminosity is altered. In the underlying research work it is assumed that the luminosity level is the same for all the images, as each image is obtained by an iriscope working in similar conditions. The model provides resilience towards variation of scale and rotation as compared to other techniques, which require coherence of phase and size. The model can be further improved by the incorporation of a technique that processes each image to provide uniform luminosity. Furthermore, the ROC distribution obtained from all the test cases (shown in Figure 6) shows that the performance of the proposed model is comparable with the Daugman method, while it yields better performance than the methods described in [1, 6, 20].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[2] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586–595, 2007.

[3] K. Roy, P. Bhattacharya, and R. C. Debnath, "Multi-class SVM based iris recognition," in Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT '07), pp. 1–6, Dhaka, Bangladesh, December 2007.

[4] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41–50, 2008.

[5] Y. D. Khan, F. Ahmad, and M. W. Anwar, "A neuro-cognitive approach for iris recognition using back propagation," World Applied Sciences Journal, vol. 16, no. 5, pp. 678–685, 2012.

[6] L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of the 5th Asian Conference on Computer Vision, pp. 279–283, Melbourne, Australia, 2002.

[7] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998.

[8] K. Lee, S. Lim, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61–70, 2001.

[9] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.

[10] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.

[11] E. Arias-Castro and D. L. Donoho, "Does median filtering truly preserve edges better than linear filtering?" Annals of Statistics, vol. 37, no. 3, pp. 1172–1206, 2009.

[12] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.

[13] N. Kanopoulos, N. Vasanthavada, and R. L. Baker, "Design of an image edge detection filter using the Sobel operator," IEEE Journal of Solid-State Circuits, vol. 23, no. 2, pp. 358–367, 1988.

[14] J. D. Foley, Computer Graphics: Principles and Practice, Addison-Wesley, Reading, Mass, USA, 1996.

[15] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, no. 12, pp. 1117–1138, 1991.

[16] J. Flusser, T. Suk, and B. Zitova, Moments and Moment Invariants in Pattern Recognition, John Wiley & Sons, Chichester, UK, 2009.

[17] W. T. Freeman, D. B. Anderson, P. A. Beardsley, et al., "Computer vision for interactive computer graphics," IEEE Computer Graphics and Applications, vol. 18, no. 3, pp. 42–52, 1998.

[18] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, "NP-hardness of Euclidean sum-of-squares clustering," Machine Learning, vol. 75, no. 2, pp. 245–248, 2009.

[19] J. A. Hartigan and M. A. Wong, "Algorithm AS 136: a k-means clustering algorithm," Journal of the Royal Statistical Society C: Applied Statistics, vol. 28, no. 1, pp. 100–108, 1979.

[20] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognition, vol. 38, no. 2, pp. 231–240, 2005.



produce discriminating texture features in [6]. Several sorts of transformations also prove helpful in extracting useful features from an iris image. This feature vector is further used to form a classification approach for identifying a person based on his iris image [7, 8]. In the groundbreaking work by Daugman, the iris recognition principle is based on the failure of statistical independence tests on iris phase structure encoded by multiscale quadrature wavelets. The combinatorial complexity of this phase information across different persons generates discriminating entropy, enabling the most probable decision about a person's identity [9].

Most of the techniques based on feature extraction are designed for images of a certain fixed resolution. They fail to provide the desired result for the same images at a different resolution, which implies that the model is not scale invariant. Techniques making use of neural networks incorporate a time-consuming training procedure. At times this training process may prove to be tricky, rendering the model unable to yield quick results. On the other hand, some techniques that make use of certain filters may produce undesired results if the image is rotated, which implies that such models are not rotation invariant. In this paper a scale and rotation invariant technique for the same purpose is described. The proposed technique requires little training, after which results are produced instantly. It is based on the use of image moments. Moments are properties that describe the characteristics of a certain distribution of data. Image moments (namely, Hu moments) are a quantitative measure of the shape of the distribution formed by data collected as image pixel intensities and their locations [10].

In the proposed work the iris is segmented from an eye image. Image moments are computed from the segmented grayscale image. Classification of an iris is performed by the k-means algorithm. The composition of the paper is as follows. Section 2 gives an overview of the iris recognition process. Section 3 explains the method used for iris segmentation. Section 4 gives a method for transforming the radial information of the iris into rectangular form. Section 5 explains how this method can be further optimized. Image moments and the method of computation of moments are described in Section 6. Section 7 describes the adoption of the k-means algorithm for clustering and classification using moments information. Some of the results are discussed in Section 8, and Section 9 presents some conclusions.

2 Iris Recognition

Initially the image of an eye is acquired by a device called an iriscope, specifically designed for eye image acquisition at a high resolution. A large database of such images is collected, having several classes. The iris within the image is segmented using an accurate and sufficiently fast technique. The iris image is of radial nature rather than rectangular, which makes it unsuitable to be processed by any mathematical or statistical model of linear nature. There are two approaches to resolve this problem. The first approach is to adopt a model capable of processing data in its inherent radial form. Other approaches require transformation of the radial data into multidimensional linear form such that the information pertaining to iris texture is retained. In this piece of work the latter approach is adopted.

The information within the texture of the rectangular image may be used to form a probability density function. The image moments quantify the characteristics of this distribution. Using these raw moments, translation, scale, and rotation invariant moments are computed. Accumulated, these moments describe the characteristics of the pattern of the iris. This forms a feature vector which is later used for classification of iris images.

3 Iris Segmentation

Each image in the database contains the iris pattern, which is of interest; the rest of the image is of no use and therefore is not processed. The iris is extracted from the image using the segmentation process described in [5]. The iris is modeled as a disk-like structure consisting of two concentric circles (see Figure 2). The noise in the eye image is suppressed using numerous iterations of median filtering [11]. The image with reduced noise is filtered to extract edges using an edge detection algorithm like the Canny [12] or the Sobel [13] filter, as shown in Figure 1(a). Now, using the resultant image, the iris outline is extracted. The image is scanned top to bottom, left to right, line by line. Each point on the outer and the inner edge is stored in two separate arrays. These points are further used to determine the center and the radii of the concentric circles forming the iris and the pupil, as shown in Figure 1(b). Assuming that the outline of the iris is a circle, a point $(x, y)$ on the circle with the center at $(-g, -f)$ satisfies the equation

$$x^2 + y^2 + 2gx + 2fy + c = 0 \quad (1)$$

And the radius of the circle is given as

$$r = \sqrt{g^2 + f^2 - c} \quad (2)$$

Choosing any three arbitrary points from the array containing points of the circle, a system of simultaneous equations is formed. The solutions for $c$, $f$, and $g$ in terms of the selected three points are derived from the system and are given as

$$g = \frac{(x_1^2 - x_3^2 + y_1^2 - y_3^2)(y_2 - y_1) - (x_1^2 - x_2^2 + y_1^2 - y_2^2)(y_3 - y_1)}{2\left[(x_3 - x_1)(y_2 - y_1) + (x_1 - x_2)(y_3 - y_1)\right]}$$

$$f = \frac{x_1^2 - x_2^2 + y_1^2 - y_2^2 + 2g(x_1 - x_2)}{2(y_2 - y_1)} \quad (3)$$

where $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ are the three arbitrary points.

Putting in the values of $f$ and $g$, the value of $c$ is determined from the following equation:

$$c = -x_1^2 - y_1^2 - 2gx_1 - 2fy_1 \quad (4)$$
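To make (1)–(4) concrete, the circle-fitting step can be sketched in a few lines of Python; the function name and the sample points below are illustrative, not taken from the paper:

```python
def circle_from_points(p1, p2, p3):
    """Fit x^2 + y^2 + 2gx + 2fy + c = 0 through three points,
    returning the center (-g, -f) and radius r, per equations (1)-(4)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = 2.0 * ((x3 - x1) * (y2 - y1) + (x1 - x2) * (y3 - y1))
    if abs(denom) < 1e-12:
        raise ValueError("the three points are collinear")
    # g from equation (3)
    g = ((x1**2 - x3**2 + y1**2 - y3**2) * (y2 - y1)
         - (x1**2 - x2**2 + y1**2 - y2**2) * (y3 - y1)) / denom
    # f from equation (3); assumes y2 != y1 for the chosen triplet
    f = (x1**2 - x2**2 + y1**2 - y2**2 + 2 * g * (x1 - x2)) / (2.0 * (y2 - y1))
    # c from equation (4), then r from equation (2)
    c = -x1**2 - y1**2 - 2 * g * x1 - 2 * f * y1
    r = (g * g + f * f - c) ** 0.5
    return (-g, -f), r
```

For example, three points sampled from a circle of center (2, 3) and radius 5 recover exactly those parameters, which is how the segmentation step estimates the pupil and iris boundaries from edge points.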


(a) The figure shows an iris image after edge detection. (b) The figure shows the disk shaped characteristics of the iris; note the circles overlapping the inner and outer circular edges.

Figure 1: The figure depicts an iris image after edge detection, making the disk shaped edges apparent.

Figure 2: Transforming the radial iris into rectangular form (annotated with inner radius r, outer radius R, and angle Θ).

Moreover, the radius $r$ is determined using (2). The center and the radii of both the concentric circles are determined in the described manner. The information within the inner circle is left out, as it encompasses the pupil, while the information bound between the inner and outer circles contains the significant and unique iris pattern. Several triplets of circle points are used to compute the center of each circle. The best estimation of the center is achieved by discarding extreme center points and then taking the mean of the rest of the points. For each center point $(x_i, y_i)$ of the inner circle and each center point $(u_i, v_i)$ of the outer circle, the mean $(x_c, y_c)$ is computed as

$$x_c = \frac{\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} u_i}{2n}, \qquad y_c = \frac{\sum_{i=1}^{n} y_i + \sum_{i=1}^{n} v_i}{2n} \quad (5)$$

Similarly, averages for the radius of the inner circle $r_m$ and the radius of the outer circle $R_m$ are computed as

$$r_m = \frac{\sum_{i=1}^{n} r_i}{n}, \qquad R_m = \frac{\sum_{i=1}^{n} R_i}{n} \quad (6)$$

The pattern formed by the radial and longitudinal muscles of the iris is of interest. Further, this pattern is extracted to form a rectangular image. A computationally moderate solution to the problem must provide a faster transformation method. One such method is discussed in [5], which transforms the radial image into rectangular form and further makes use of the midpoint algorithm to optimize it, as described in the next section.

4 Radial to Linear Transformation

An arbitrary Cartesian point $(x', y')$ anywhere on the disk-like structure having parametric coordinates $(r, \theta)$ is given as

$$x' = r \cos(\theta), \qquad y' = r \sin(\theta) \quad (7)$$

Cartesian points along the line starting at parametric point $(r_m, \theta)$ and ending at $(R_m, \theta)$ are discretized at appropriate intervals and are placed in a column of a two-dimensional array. A number of columns are collected, starting from $\theta = 0^\circ$ and incrementing in small steps up to $\theta = 360^\circ$. The collection of these columns forms a rectangular canvas containing the required iris pattern (see Figure 3).
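As a sketch of the transformation in (7), the unwrapping might look like the following; this uses simple nearest-pixel sampling rather than the optimized midpoint scheme of Section 5, and the function name and step counts are illustrative assumptions:

```python
import numpy as np

def unwrap_iris(image, center, r_inner, r_outer, radial_steps=64, angular_steps=360):
    """Sample the annulus between r_inner and r_outer along radial lines
    (equation (7)) and stack each line as a column of a rectangular canvas."""
    cx, cy = center
    canvas = np.zeros((radial_steps, angular_steps), dtype=image.dtype)
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_steps, endpoint=False)
    radii = np.linspace(r_inner, r_outer, radial_steps)
    for j, theta in enumerate(thetas):
        # x' = r cos(theta), y' = r sin(theta), offset by the circle center
        xs = np.clip((cx + radii * np.cos(theta)).round().astype(int), 0, image.shape[1] - 1)
        ys = np.clip((cy + radii * np.sin(theta)).round().astype(int), 0, image.shape[0] - 1)
        canvas[:, j] = image[ys, xs]
    return canvas
```

Each column of the returned canvas corresponds to one angle θ, so the disk shaped iris band becomes the rectangular image of Figure 3.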

The computations required to compute each point within the disk shaped structure are reduced by exploiting the symmetric properties of a circle. A circular shape exhibits eight-way symmetry [14]. This means that for any computed point $(a, b)$, seven more points are determined that lie on the same circle using its symmetric properties. These seven points are $(b, a)$, $(-a, b)$, $(-b, a)$, $(a, -b)$, $(b, -a)$, $(-a, -b)$, and $(-b, -a)$, given that the center lies at the origin. In case the center lies at an arbitrary point, these points are translated accordingly. Use of this symmetric property reduces the computations eightfold. Each point on the line is determined by incrementing the $x$-coordinate in discrete steps and calculating the corresponding value of the $y$-coordinate using the line equation.

Figure 3: The figure illustrates iris images (a) before and (b) after radial to rectangular transformation.

The $x$-coordinate and the $y$-coordinate of the pixels along a single line making an arbitrary angle $\theta$ can be determined incrementally, where the starting pixel coordinates are $(x', y')$ and the coordinates of the endpoint pixel are $(x, y)$. Based upon the value of the $x$-coordinate of the previous pixel, the value of the $x$-coordinate for the next pixel is calculated by incrementing the previous value of $x$. This value of the $x$-coordinate, say $x_m$, is put into the following equation:

$$y_m = \frac{y - y'}{x - x'}\,(x_m - x') + y' \quad (8)$$

which yields the corresponding $y$-coordinate.

5 Optimizing the Algorithm

The determination of points along a line is further optimized by the use of the midpoint method [14]. The computations required to yield a point along a line are reduced to the mere addition of a small incremental value. The gradient $m$ is computed for this purpose, as given in the following equation:

$$m = \frac{y - y'}{x - x'} \quad (9)$$

Let

$$dy = y - y', \qquad dx = x - x' \quad (10)$$

where the line endpoints are $(x', y')$ and $(x, y)$. In accordance with the midpoint method for a straight line with tangent between 0 and 1, the increments are $\Delta E = 2\,dy$ and $\Delta NE = 2(dy - dx)$. Initially the control variable is $d = 2\,dy - dx$. If the value of $d$ is positive then the North East pixel is chosen; otherwise the East pixel is chosen. At each step $d$ is updated by adding $\Delta E$ or $\Delta NE$ accordingly [5, 14].
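A minimal sketch of the midpoint stepping just described, assuming integer endpoints and a slope between 0 and 1 (the function name is illustrative):

```python
def midpoint_line(x0, y0, x1, y1):
    """Midpoint line algorithm for 0 <= slope <= 1: each step adds a small
    increment (dE or dNE) to the control variable d instead of evaluating (8)."""
    dy, dx = y1 - y0, x1 - x0
    assert 0 <= dy <= dx, "this sketch assumes a tangent between 0 and 1"
    d = 2 * dy - dx          # initial control variable
    d_e, d_ne = 2 * dy, 2 * (dy - dx)
    points, y = [], y0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if d <= 0:           # choose the East pixel: y stays the same
            d += d_e
        else:                # choose the North East pixel: y advances
            d += d_ne
            y += 1
    return points
```

Only integer additions occur inside the loop, which is the efficiency gain over evaluating (8) at every pixel.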

6 Pattern Recognition Using Image Moments

A perceptive action performed on intricate structures needs to quantify their attributes. The state of any structure is quantifiable into data. Diversification of this data represents interaction or changes in the state. All such quantification methods generate finite data. Data by itself is insignificant, but the information embedded within the data is useful. Information is either extracted directly from the data itself or from the patterns formed by the arrangement of data. Researchers have devised various models for extracting information from data embedded in an image. Applications based on such models do not add to the contents of the data; rather, they find hidden data patterns in order to extract interesting and useful information. A probability density can be formed for any dataset. The parameters of the probability density function inform us about the general manner in which data is distributed. Moments are characteristics of the probability density function which are based on the kurtosis and skewness of the probability density function. Image moments describe the properties of a distribution formed using the pixel data of the image along its axes. The moments are typically chosen to depict a certain interesting property of the image. Such moments prove beneficial in extracting and summarizing the properties of the image in order to produce useful results. Properties of an image such as centroid, area, and orientation are quantified by this process. Another dividend of image moments is that they bring together the local and global geometric details of a grayscale image [15].

6.1 Extracting Moments from an Image. An image in the real world is modeled using a Cartesian distribution function $f(x, y)$ in its analog form. This function is used to provide moments of order $(p + q)$ over the image plane $P$ and is generalized as

$$M_{pq} = \iint_P \psi_{pq}(x, y)\, f(x, y)\, dx\, dy, \qquad p, q = 0, 1, 2, \ldots, \infty \quad (11)$$

where $\psi_{pq}$ is the basis function and $P$ is the image plane. Equation (11) yields a weighted average over the plane $P$.


The basis function is designed such that it represents some invariant feature of the image. Furthermore, the properties of the basis function are passed on to the moments. An image is of discrete nature; thus it is divided into pixels, each having a discrete intensity level. Equation (11) is adapted for the digital image as

$$M_{pq} = \sum_x \sum_y \psi_{pq}(x, y)\, I(x, y), \qquad p, q = 0, 1, 2, \ldots, \infty \quad (12)$$

where $I(x, y)$ is the intensity of the pixel in the digital image at the $x$-th row and $y$-th column.

In [10, 16] the authors prove that the two-dimensional continuous $(p + q)$th order moments are defined using the integral

$$M_{pq} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^p y^q I(x, y)\, dx\, dy \quad (13)$$

where $f(x, y)$ lies within some finite region of the $xy$-plane. In the case of a digital image, the integrals are replaced by summations, which is formulated as

$$M_{pq} = \sum_{x=1}^{K} \sum_{y=1}^{L} x^p y^q I(x, y) \quad (14)$$

where $x^p y^q$ is the basis function, $K$ and $L$ are the dimensions of the image, and $M_{pq}$ is the Cartesian moment for the two-dimensional image. Note that this basis function is highly correlated, that is, nonorthogonal. The moment $M_{00}$ represents the image, whereas the first order moments are used to find the center of mass, or centroid, of the image, given as

$$\bar{x} = \frac{M_{10}}{M_{00}}, \qquad \bar{y} = \frac{M_{01}}{M_{00}} \quad (15)$$

where $(\bar{x}, \bar{y})$ is the centroid.
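Equations (14) and (15) translate almost directly into code. A minimal sketch using NumPy, with the one assumption that pixel indices start at 0 rather than 1:

```python
import numpy as np

def raw_moment(image, p, q):
    """Discrete Cartesian moment M_pq of equation (14); rows index y, columns x."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return float(((xs ** p) * (ys ** q) * image).sum())

def centroid(image):
    """Center of mass from the first order moments, equation (15)."""
    m00 = raw_moment(image, 0, 0)
    return raw_moment(image, 1, 0) / m00, raw_moment(image, 0, 1) / m00
```

For a single bright pixel the centroid is simply that pixel's coordinates, which makes the functions easy to sanity-check.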

6.2 Centralized Moments. Once the centroid is determined, it is used to compute the centralized moments. In [15] the central moments for two-dimensional data are given as

$$\mu_{pq} = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q\, I(x, y) \quad (16)$$

where $\mu_{pq}$ are the central moments. Note that these are similar to Cartesian moments translated to the centroid. This depicts the translation invariant property of the centralized moments, which are always akin to the centroid of the segmented object.

Further simplification of (16) up to order 3 generates the following moments:

$$\mu_{00} = M_{00}, \qquad \mu_{01} = 0, \qquad \mu_{10} = 0$$
$$\mu_{11} = M_{11} - \bar{x} M_{01} = M_{11} - \bar{y} M_{10}$$
$$\mu_{20} = M_{20} - \bar{x} M_{10}, \qquad \mu_{02} = M_{02} - \bar{y} M_{01}$$
$$\mu_{21} = M_{21} - 2\bar{x} M_{11} - \bar{y} M_{20} + 2\bar{x}^2 M_{01}$$
$$\mu_{12} = M_{12} - 2\bar{y} M_{11} - \bar{x} M_{02} + 2\bar{y}^2 M_{10}$$
$$\mu_{30} = M_{30} - 3\bar{x} M_{20} + 2\bar{x}^2 M_{10}$$
$$\mu_{03} = M_{03} - 3\bar{y} M_{02} + 2\bar{y}^2 M_{01} \quad (17)$$
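A short sketch of (16), useful for checking the translation invariance claimed above (names illustrative):

```python
import numpy as np

def central_moment(image, p, q):
    """Central moment mu_pq of equation (16), computed about the centroid."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    m00 = image.sum()
    xbar = (xs * image).sum() / m00
    ybar = (ys * image).sum() / m00
    return float(((xs - xbar) ** p * (ys - ybar) ** q * image).sum())
```

Shifting a blob anywhere inside the frame leaves every central moment unchanged, which is exactly the translation invariance the section describes.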

6.3 Scale Invariant Moments. These moments are further made scale invariant, as explained in [4, 5], and are given as

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}} \quad (18)$$

where $\eta_{pq}$ are the scale normalized central moments and $\gamma = (p + q)/2 + 1$, where $(p + q) \ge 2$.

6.4 Image Orientation. The second order central moments contain information about the orientation of the image. Using these moments, a covariance matrix is derived. Let

$$\mu'_{20} = \frac{\mu_{20}}{\mu_{00}} = \frac{M_{20}}{M_{00}} - \bar{x}^2, \qquad \mu'_{02} = \frac{\mu_{02}}{\mu_{00}} = \frac{M_{02}}{M_{00}} - \bar{y}^2, \qquad \mu'_{11} = \frac{\mu_{11}}{\mu_{00}} = \frac{M_{11}}{M_{00}} - \bar{x}\bar{y} \quad (19)$$

and then the covariance matrix is given as

$$\operatorname{cov}[I(x, y)] = \begin{pmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{pmatrix} \quad (20)$$

The major and minor axes of the image intensity correlate with the eigenvectors of the given covariance matrix. The orientation of the image is described by the eigenvector of the highest eigenvalue. In [4] it is shown that the angle $\Theta$ is computed by the following equation:

$$\Theta = \frac{1}{2} \tan^{-1}\left(\frac{2\mu'_{11}}{\mu'_{20} - \mu'_{02}}\right) \quad (21)$$

provided that

$$\mu'_{20} - \mu'_{02} \neq 0 \quad (22)$$


Using (20), the eigenvalues of the covariance matrix are easily obtained and are given as

$$\lambda_i = \frac{\mu'_{20} + \mu'_{02}}{2} \pm \frac{\sqrt{4\mu'^{\,2}_{11} + (\mu'_{20} - \mu'_{02})^2}}{2} \quad (23)$$

Notice that these values are proportional to the square of the length of the eigenvector axes. The difference between the eigenvalues marks yet another important characteristic: it shows how elongated the image is. This property is termed eccentricity and is computed as

$$\sqrt{1 - \frac{\lambda_2}{\lambda_1}} \quad (24)$$
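The quantities in (19), (21), (23), and (24) can be sketched together in one function; this is a minimal illustration (function name assumed), using `arctan2` to keep (21) well defined even when $\mu'_{20} = \mu'_{02}$:

```python
import numpy as np

def orientation_and_eccentricity(image):
    """Orientation angle of (21) and eccentricity of (24), computed from
    the normalized second order central moments of (19) and (23)."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    m00 = image.sum()
    xbar, ybar = (xs * image).sum() / m00, (ys * image).sum() / m00
    mu20 = ((xs - xbar) ** 2 * image).sum() / m00
    mu02 = ((ys - ybar) ** 2 * image).sum() / m00
    mu11 = ((xs - xbar) * (ys - ybar) * image).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)          # equation (21)
    root = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02) / 2 + root / 2                      # equation (23)
    lam2 = (mu20 + mu02) / 2 - root / 2
    ecc = np.sqrt(1 - lam2 / lam1)                           # equation (24)
    return theta, ecc
```

A one-pixel-high horizontal bar, for instance, has orientation 0 and eccentricity 1, the most elongated case.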

6.5 Rotation Invariant Moments. Previously we have discussed translation and scale invariant moments. In [16] rotation invariant moments are derived, which are usually termed the Hu set of invariant moments. These are given as follows:

$$I_1 = \eta_{20} + \eta_{02}$$
$$I_2 = (\eta_{20} - \eta_{02})^2 + (2\eta_{11})^2$$
$$I_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$$
$$I_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$$
$$I_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$$
$$I_6 = (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$$
$$I_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]$$
$$I_8 = \eta_{11}\left[(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2\right] - (\eta_{20} - \eta_{02})(\eta_{30} + \eta_{12})(\eta_{03} + \eta_{21}) \quad (25)$$

Every one of the rotation invariant moments extracts a characteristic attribute of the image. For example, $I_1$ represents the moment of inertia about the centroid, while $I_7$ extracts skew invariant properties, which are useful in differentiating between images that are mirror reflections of each other [10, 15–17].
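A sketch combining (18) with the first four invariants of (25); the remaining four follow the same pattern, and the function name is illustrative:

```python
import numpy as np

def hu_invariants(image):
    """Scale normalized moments eta_pq of (18) and the first four
    Hu invariants of (25), computed directly from the pixel grid."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    m00 = image.sum()
    xbar, ybar = (xs * image).sum() / m00, (ys * image).sum() / m00

    def eta(p, q):
        # mu_pq of (16) normalized by mu_00^gamma with gamma = (p+q)/2 + 1
        mu = ((xs - xbar) ** p * (ys - ybar) ** q * image).sum()
        return mu / m00 ** ((p + q) / 2 + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    i1 = n20 + n02
    i2 = (n20 - n02) ** 2 + (2 * n11) ** 2
    i3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    i4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    return i1, i2, i3, i4
```

Rotating an image by 90 degrees leaves all four values unchanged, which is the invariance the feature vector relies on.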

7 Clustering for Classification

By now the iris image has been segmented and transformed into a rectangular canvas. All described moments are applied and a feature vector, namely $I$, is extracted. This vector contains translation, scale, and rotation invariant moments together with orientation related moments. This vector corresponds to various features of the image; hence it is used for classification. An unsupervised approach is adopted for classification, using the k-means clustering algorithm. Given a set of multidimensional observations $(x_1, x_2, \ldots, x_n)$, the k-means algorithm partitions the observations into $K$ sets, where $K \le n$, generating the set $S = \{S_1, S_2, \ldots, S_K\}$ so as to minimize the following objective function:

$$\arg\min_S \sum_{i=1}^{K} \sum_{x_j \in S_i} \left| x_j - \mu_i \right|^2 \quad (26)$$

where $\mu_i$ is the mean of all the observations in $S_i$. Initially, the extracted moment vectors for an iris image sample are considered to be the initial means.

The k-means algorithm has two major steps, namely the assignment step and the update step. The mean is used to assign each observation to a cluster in the assignment step. An observation is assigned to the cluster whose mean makes the closest match. Formally, this step generates the set $S_i$ such that

$$S_i = \left\{ x_r : \left| x_r - m_i \right| \le \left| x_r - m_j \right| \ \forall\, 1 \le j \le K \right\} \quad (27)$$

Each observation \(x_r\) is associated with exactly one \(S_i\), even if two or more distances are found comparable. The update step then recalculates the mean of each cluster as the centroid of the observations assigned to it in the previous step, as given in the following equation:

\[
m_i = \frac{1}{\left| S_i \right|} \sum_{x_j \in S_i} x_j. \qquad (28)
\]

Both steps are iterated and the centroids are readjusted. This process continues until there is no appreciable change in the means; at this stage the means have converged and no further training is required [18, 19].
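The two steps can be sketched as follows. This is not the authors' implementation, only a minimal NumPy sketch; the initial means are seeded here with spread-out sample vectors, in the spirit of the paper's seeding with extracted moment vectors.

```python
import numpy as np

def kmeans(X, K, iters=100):
    """Minimal k-means on an (n, d) array of feature vectors."""
    means = X[np.linspace(0, len(X) - 1, K).astype(int)].copy()  # seed with K samples
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assignment step, eq. (27): each vector joins the cluster with the nearest mean.
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)            # argmin ties resolve to exactly one S_i
        # Update step, eq. (28): recompute each mean as the centroid of its cluster.
        new_means = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                              else means[k] for k in range(K)])
        if np.allclose(new_means, means):    # converged: no appreciable change
            break
        means = new_means
    return means, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, (20, 2)) for c in (0.0, 5.0)])  # two toy clusters
means, labels = kmeans(X, K=2)
```

After convergence, `means` holds the cluster centroids used later for Euclidean-distance matching.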

8. Results

The CASIA database, containing thousands of images belonging to hundreds of different people, is used to gather test results. Nearly one-fourth of the iris images from each class are retained as test cases while the rest are used for training; the distorted images within the database are rejected. The iris portion of each image is marked out using the segmentation algorithm and is later transformed into a rectangular canvas. Further, the grayscale rectangular canvas of the iris is used

The Scientific World Journal 7

Figure 4: Each of (a) and (b) shows different clusters (feature values plotted on a ×10⁴ scale). Notice that all the clusters are linearly separable and can be distinguished by their Euclidean distance from the centroid.

to compute the image moment vector. This vector contains information which is translation, scale, and rotation invariant and also provides orientation information. Using the k-means algorithm each image is assigned to a cluster; the algorithm is iterated until convergence is achieved and the centroid of each cluster is determined. Once the system is fully trained, it is ready to accept an arbitrary input and provide a match. The model responds with the correlation of an arbitrary image moment vector to a cluster if the image belongs to a known class. Figure 4 depicts the various clusters formed and shows how the class of a sample is distinguished by the Euclidean distance of its feature vector from the centroid of an arbitrary cluster. Moreover, Figure 5 shows a confusion matrix depicting the accuracy of the model: for certain arbitrary classes the accuracy is 99.0%, while the overall accuracy for all the images in the database is estimated to be 98.5%. The model also reports a level of confidence for each match based on the Euclidean distance of the sample from the centroid of the identified cluster. Level 0 is the highest, meaning that the Euclidean distance of the sample from the centroid of the cluster is low; level 4 is the lowest, indicating that the distance of the sample from the centroid of any cluster does not lie within a stipulated threshold to confidently indicate a match.
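The matching rule with graded confidence can be sketched as follows. The centroids, the query vector, and in particular the threshold values here are all hypothetical, since the paper does not publish its thresholds; levels 0–3 grade the distance and level 4 signals no confident match.

```python
import numpy as np

def match(feature, centroids, thresholds=(0.5, 1.0, 2.0, 4.0)):
    """Return (class_index, confidence_level); level 4 means no confident match."""
    d = np.linalg.norm(centroids - feature, axis=1)   # Euclidean distance to each centroid
    best = int(d.argmin())                            # nearest cluster wins
    for level, t in enumerate(thresholds):            # smaller distance => level closer to 0
        if d[best] <= t:
            return best, level
    return best, 4                                    # outside every stipulated threshold

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])        # assumed trained centroids
print(match(np.array([0.2, 0.1]), centroids))         # close to cluster 0 => (0, 0)
```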

Furthermore, a number of experiments were carried out to determine the accuracy and efficiency of the proposed model in comparison with other competitive models. In [20] the authors present a technique which extracts the features of the iris using fine-to-coarse approximation at different resolution levels, determined through a discrete dyadic wavelet transform zero-crossing representation. The resultant one-dimensional feature vector is used to find a match by computing various distances with arbitrary feature vectors. Ma et al. present yet another iris recognition technique using Gabor filters [1, 6]. The authors use a bank of Gabor filters to extract a fixed-length feature vector signifying the global and local characteristics of the iris image; a match is established by computing the weighted Euclidean distance between feature vectors of arbitrary iris images. Daugman in [9] relies on the morphogenetic randomness of texture in the trabecular meshwork of the iris: a failure of the statistical independence test on two coded patterns from the same iris indicates a match. This method extracts the visible texture of the iris from a real-time video image, which is then encoded into a compact sequence of multiscale quadrature 2D Gabor wavelet coefficients; the most significant 256 bytes form the iris code, and an exclusive OR operation is performed to generate a decision. All the above-mentioned techniques, including the proposed one, are executed in order to obtain results. The genuine acceptance rate (GAR) and false acceptance rate (FAR) are observed for each technique, and a receiver operating characteristics (ROC) distribution is plotted for each technique based on the results, as shown in Figure 6. The ROC distribution comparatively highlights the accuracy, along with the frequency of occurrence of errors, of the proposed and other current state-of-the-art models. The following section briefly discusses the proposed system along with an interpretation of the ROC distribution formed.
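The XOR-based decision described for [9] can be sketched as follows. This is only a schematic with hypothetical 256-byte codes and an illustrative 0.32 Hamming-distance threshold; the actual statistical independence test in [9] is more involved.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two iris codes (XOR, then popcount)."""
    diff = np.bitwise_xor(code_a, code_b)            # exclusive OR of the byte arrays
    return np.unpackbits(diff).sum() / (len(code_a) * 8)

def same_iris(code_a, code_b, threshold=0.32):
    """Statistical independence fails (distance below threshold) => same iris."""
    return hamming_distance(code_a, code_b) < threshold

rng = np.random.default_rng(0)
a = rng.integers(0, 256, 256, dtype=np.uint8)        # 256-byte iris code
b = rng.integers(0, 256, 256, dtype=np.uint8)        # unrelated code: distance near 0.5
print(same_iris(a, a), same_iris(a, b))              # True False
```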

9. Conclusion

Through analysis of the data obtained after moment extraction, a number of conclusions are inferred. Images of a certain iris differing in orientation yielded varying eigenvalues and eccentricity. However, a change in orientation of an image barely affects the values of the rotation invariant moments, while raw and scale invariant moments are affected. A change in orientation also affects the Euclidean distance of the moment vectors from the centroid. Despite this, there still remains a great probability of the image being classified


Figure 5: The figure shows the confusion matrix for some arbitrary classes; the accuracy of the model for these classes is 99.0%.

Figure 6: The figure illustrates the receiver operating characteristics distributions for different competitive techniques, including the proposed one (curves shown: Avila, Li Ma, Proposed, and Daugman).

correctly because of coherence in the scale invariant moments. Although the model exhibits scale and rotation invariant attributes, some impairment is introduced by the luminosity of the image. Two arbitrary images of the same object yield comparable moments if the luminosity is the same, but they may yield differing moments if the luminosity is altered. In the underlying research work it is assumed that the luminosity level is the same for all the images, as each image is obtained by an iriscope working in similar conditions. The model provides resilience towards variation of scale and rotation as compared to other techniques, which require coherence of phase and size. The model can be further improved by incorporating a technique that processes each image to provide uniform luminosity. Furthermore, the ROC distribution obtained from all the test cases (shown in Figure 6) shows that the performance of the proposed model is comparable with Daugman's method, while it yields better performance than the methods described in [1, 6, 20].
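One candidate for the luminosity normalization suggested above is plain histogram equalization. This is a sketch of one possible preprocessing step, not the authors' proposal; it assumes 8-bit grayscale input and uses NumPy only.

```python
import numpy as np

def equalize(img):
    """Map 8-bit grayscale intensities through the normalized CDF of their histogram."""
    hist = np.bincount(img.ravel(), minlength=256)     # per-intensity pixel counts
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)           # remap every pixel

# A hypothetical under-exposed iris canvas: intensities bunched around 40.
dark = np.clip(np.random.default_rng(0).normal(40, 10, (32, 32)), 0, 255).astype(np.uint8)
flat = equalize(dark)   # intensities now spread across the full 0-255 range
```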

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[2] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586–595, 2007.

[3] K. Roy, P. Bhattacharya, and R. C. Debnath, "Multi-class SVM based iris recognition," in Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT '07), pp. 1–6, Dhaka, Bangladesh, December 2007.

[4] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41–50, 2008.

[5] Y. D. Khan, F. Ahmad, and M. W. Anwar, "A neuro-cognitive approach for iris recognition using back propagation," World Applied Sciences Journal, vol. 16, no. 5, pp. 678–685, 2012.

[6] L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of the 5th Asian Conference on Computer Vision, pp. 279–283, Melbourne, Australia, 2002.

[7] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998.

[8] K. Lee, S. Lim, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61–70, 2001.

[9] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.

[10] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.

[11] E. Arias-Castro and D. L. Donoho, "Does median filtering truly preserve edges better than linear filtering?" Annals of Statistics, vol. 37, no. 3, pp. 1172–1206, 2009.

[12] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.

[13] N. Kanopoulos, N. Vasanthavada, and R. L. Baker, "Design of an image edge detection filter using the Sobel operator," IEEE Journal of Solid-State Circuits, vol. 23, no. 2, pp. 358–367, 1988.

[14] J. D. Foley, Computer Graphics: Principles and Practice, Addison-Wesley, Reading, Mass, USA, 1996.

[15] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, no. 12, pp. 1117–1138, 1991.

[16] J. Flusser, T. Suk, and B. Zitova, Moments and Moment Invariants in Pattern Recognition, John Wiley & Sons, Chichester, UK, 2009.

[17] W. T. Freeman, D. B. Anderson, P. A. Beardsley, et al., "Computer vision for interactive computer graphics," IEEE Computer Graphics and Applications, vol. 18, no. 3, pp. 42–52, 1998.

[18] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, "NP-hardness of Euclidean sum-of-squares clustering," Machine Learning, vol. 75, no. 2, pp. 245–248, 2009.

[19] J. A. Hartigan and M. A. Wong, "Algorithm AS 136: a k-means clustering algorithm," Journal of the Royal Statistical Society C: Applied Statistics, vol. 28, no. 1, pp. 100–108, 1979.

[20] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognition, vol. 38, no. 2, pp. 231–240, 2005.




8 The Scientific World Journal

0

000

000

000

000

000

000

000

000

000

000

00

0

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

00

0

000

00

0

000

000

00

0

000

00

0

00

0

000

00

0

000

000

000

000

00

0

00

0

00

0

000

000

000

000

000

000

000

000

000

00

0

000

00

0

00

0

00

0

00

0

000

000

000

000

000

00

0

000

000

000

000

000

000

00

0

000

00

0

000

00

0

000

000

000

000

00

0

000

000

000

000

000

000

000

00

0

00

0

00

0

000

000

000

00

0

000

00

0

000

000

000

000

00

0

000

000

000

000

00

0

000

000

000

000

00

0

000

001

051

050

00

0

000

00

0

00

0

000

00

17

83

17

8317

83

17

83

18

88

16

78

16

78

19

93

2098

15

74

15

741574

100 100 100 100 100 100 100 10000

10000

100001000010000

10000

1000010000

10000

1000010000

00 00 00 00 00 00 0010000

10000

944

56938

63

990

10

941

950

50

59

1

2

3

4

5

6

7

8

9

10

11

12

1 2 3 4 5 6 7 8 9 10 11 12

Out

put c

lass

Confusion matrix

Target class

Figure 5 The figure shows the confusion matrix for some arbitrary classes while the accuracy of the model for these classes is 990

1005

1

0995

099

0985

098

0975

097

09650001 001 01 1

AvilaLi MaProposedDaugman

Figure 6The figure illustrates the receiver operating characteristicsdistributions for different competitive techniques including theproposed one

correctly because of coherence in scale invariant momentsAlthough the model exhibits scale and rotation invariantattributes but some impairment is offered by luminosity ofthe image Two arbitrary images of the same objects yieldcomparable moments if the luminosity is the same but theymay yield differing moments in case luminosity is altered Inthe underlying research work it is assumed that the luminos-ity level will be the same for all the images as each imageis obtained by an iriscope working in similar conditionsThe model provides resilience towards variation of scale androtation as compared to other techniques which requires

coherence of phase and size The model can be furtherimproved by incorporation of a technique that will processeach image to provide uniform luminosity Furthermore theROC distribution obtained (shown in Figure 6) from all thetest cases shows that the performance of proposed model iscomparable with Daugman method while it yields a betterperformance than the methods described in [1 6 20]

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] L Ma T Tan YWang and D Zhang ldquoEfficient iris recognitionby characterizing key local variationsrdquo IEEE Transactions onImage Processing vol 13 no 6 pp 739ndash750 2004

[2] D M Monro S Rakshit and D Zhang ldquoDCT-bsed iris recog-nitionrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 29 no 4 pp 586ndash595 2007

[3] K Roy P Bhattacharya and R C Debnath ldquoMulti-class SVMbased iris recognitionrdquo in Proceedings of the 10th InternationalConference on Computer and Information Technology (ICCITrsquo07) pp 1ndash6 Dhaka Bangladesh December 2007

[4] R H Abiyev and K Altunkaya ldquoPersonal iris recognitionusing neural networkrdquo International Journal of Security and ItsApplications vol 2 no 2 pp 41ndash50 2008

The Scientific World Journal 9

[5] Y D Khan F Ahmad and M W Anwar ldquoA neuro-cognitiveapproach for iris recognition using back propagationrdquo WorldApplied Sciences Journal vol 16 no 5 pp 678ndash685 2012

[6] L Ma Y Wang and T Tan ldquoIris recognition based on multi-channel Gabor filteringrdquo in Proceedings of the 5th Asian Con-ference on Computer Vision pp 279ndash283 Melbourne Australia2002

[7] W W Boles and B Boashash ldquoA human identification tech-nique using images of the iris and wavelet transformrdquo IEEETransactions on Signal Processing vol 46 no 4 pp 1185ndash11881998

[8] K Lee S Lim O Byeon and T Kim ldquoEfficient iris recognitionthrough improvement of feature vector and classifierrdquo ETRIJournal vol 23 no 2 pp 61ndash70 2001

[9] J Daugman ldquoHow iris recognition worksrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 14 no 1 pp21ndash30 2004

[10] M-K Hu ldquoVisual pattern recognition by moment invariantsrdquoIRE Tranactions on InformationTheory vol 8 no 2 pp 179ndash1871962

[11] E Arias-Castro and D L Donoho ldquoDoes median filtering trulypreserve edges better than linear filteringrdquo Annals of Statisticsvol 37 no 3 pp 1172ndash1206 2009

[12] J Canny ldquoA computational approach to edge detectionrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol8 no 6 pp 679ndash698 1986

[13] N Kanopoulos N Vasanthavada and R L Baker ldquoDesign ofan image edge detection filter using the Sobel operatorrdquo IEEEJournal of Solid-State Circuits vol 23 no 2 pp 358ndash367 1988

[14] JD FoleyComputerGraphics Principles andPractice Addison-Wesley Reading Mass USA 1996

[15] S O Belkasim M Shridhar and M Ahmadi ldquoPattern recog-nition with moment invariants a comparative study and newresultsrdquo Pattern Recognition vol 24 no 12 pp 1117ndash1138 1991

[16] J Flusser T Suk and B Zitova Moments and Moment Invari-ants in Pattern Recognition JohnWiley amp Sons Chichester UK2009

[17] W T Freeman D B Anderson P A Beardsley et al ldquoCom-puter vision for interactive computer graphicsrdquo IEEE ComputerGraphics and Applications vol 18 no 3 pp 42ndash52 1998

[18] D Aloise A Deshpande P Hansen and P Popat ldquoNP-hardnessof Euclidean sum-of-squares clusteringrdquoMachine Learning vol75 no 2 pp 245ndash248 2009

[19] J A Hartigan and M A Wong ldquoAlgorithm AS 136 a k-meansclustering algorithmrdquo Journal of the Royal Statistical Society CApplied Statistics vol 28 no 1 pp 100ndash108 1979

[20] C Sanchez-Avila and R Sanchez-Reillo ldquoTwo different ap-proaches for iris recognition using Gabor filters and multi-scale zero-crossing representationrdquo Pattern Recognition vol 38no 2 pp 231ndash240 2005


4 The Scientific World Journal

Figure 3: Iris images before (a) and after (b) the radial-to-rectangular transformation.

incrementally, while the starting pixel coordinates are \((x', y')\) and the coordinates of the endpoint pixel are \((x, y)\). Based upon the value of the \(x\)-coordinate of the previous pixel, the value of the \(x\)-coordinate for the next pixel is calculated by incrementing the previous value of \(x\). This value of the \(x\)-coordinate, say \(x_m\), is put into the following equation:
\[
y_m = \frac{y - y'}{x - x'}\,(x_m - x') + y', \quad (8)
\]
which yields the corresponding \(y\)-coordinate.
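As a concrete illustration, the radial-to-rectangular transformation of Figure 3 can be sketched as a polar resampling of the iris annulus. This is a minimal sketch rather than the authors' implementation; the function name, canvas size, and nearest-neighbour sampling are assumptions:

```python
import numpy as np

def unwrap_iris(img, cx, cy, r_in, r_out, width=256, height=64):
    """Sample the iris annulus onto a rectangular canvas: each column is
    one angle, each row one radius between the pupil boundary (r_in) and
    the outer iris boundary (r_out). Nearest-neighbour sampling."""
    thetas = np.linspace(0, 2 * np.pi, width, endpoint=False)
    rs = np.linspace(r_in, r_out, height)
    out = np.zeros((height, width), dtype=img.dtype)
    for i, r in enumerate(rs):
        xs = np.clip((cx + r * np.cos(thetas)).round().astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).round().astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]
    return out
```

A bilinear interpolation of the four neighbouring pixels would give a smoother canvas at slightly higher cost.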

5. Optimizing the Algorithm

The determination of points along a line is further optimized by the use of the midpoint method [14]. The computations required to yield a point along a line are reduced to the mere addition of a small incremental value. The gradient \(m\) is computed for this purpose as given in the following equation:
\[
m = \frac{y - y'}{x - x'}. \quad (9)
\]
Let
\[
dy = y - y', \qquad dx = x - x', \quad (10)
\]
where the line endpoints are \((x', y')\) and \((x, y)\). In accordance with the midpoint method for a straight line with tangent between 0 and 1, the increments are \(\Delta E = 2\,dy\) and \(\Delta NE = 2(dy - dx)\). Initially the control variable is \(d = 2\,dy - dx\). If the value of \(d\) is negative or zero, the east pixel is chosen; if it is positive, the north-east pixel is chosen. At each step \(d\) is updated by adding \(\Delta E\) or \(\Delta NE\) accordingly [5, 14].

6. Pattern Recognition Using Image Moments

A perceptive action performed on intricate structures needs to quantify their attributes. The state of any structure is quantifiable into data. Diversification of this data represents interaction or changes in the state. All such quantification methods generate finite data. Data by itself is insignificant, but the information embedded within the data is useful. Information is either extracted directly from the data itself or from the patterns formed by the arrangement of data. Researchers have devised various models for extracting information from data embedded in an image. Applications based on such models do not add to the contents of the data; rather, they find hidden data patterns in order to extract interesting and useful information. A probability density can be formed for any dataset. The parameters of the probability density function inform us about the general manner in which the data are distributed. Moments are characteristics of the probability density function, based on its kurtosis and skewness. Image moments describe the properties of a distribution formed using the pixel data of the image along its axes. The moments are typically chosen to depict a certain interesting property of the image. Such moments prove beneficial in extracting and summarizing the properties of the image in order to produce useful results. Properties of an image such as centroid, area, and orientation are quantified by this process. Another dividend of image moments is that they bring together the local and global geometric details of a grayscale image [15].

6.1. Extracting Moments from an Image. An image in the real world is modeled using a Cartesian distribution function \(f(x, y)\) in its analog form. This function is used to provide moments of order \((p + q)\) over the image plane \(P\) and is generalized as
\[
M_{pq} = \iint_{P} \psi_{pq}(x, y)\, f(x, y)\, dx\, dy, \quad p, q = 0, 1, 2, \ldots, \infty, \quad (11)
\]
where \(\psi_{pq}\) is the basis function and \(P\) is the image plane. Equation (11) yields a weighted average over the plane \(P\).

The basis function is designed such that it represents some invariant features of the image. Furthermore, the properties of the basis function are passed on to the moments. An image is of a discrete nature; thus it is divided into pixels, each having a discrete intensity level. Equation (11) is adapted for the digital image as
\[
M_{pq} = \sum_{x} \sum_{y} \psi_{pq}(x, y)\, I(x, y), \quad p, q = 0, 1, 2, \ldots, \infty, \quad (12)
\]
where \(I(x, y)\) is the intensity of the pixel in the digital image at the \(x\)th row and \(y\)th column.

In [10, 16] the authors prove that the two-dimensional continuous \((p + q)\)th order moments are defined using the integral
\[
M_{pq} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} x^{p} y^{q}\, I(x, y)\, dx\, dy, \quad (13)
\]
where \(f(x, y)\) lies within some finite region of the \(xy\) plane. In the case of a digital image, the integrals are replaced by summations, formulated as
\[
M_{pq} = \sum_{x=1}^{K} \sum_{y=1}^{L} x^{p} y^{q}\, I(x, y), \quad (14)
\]

where \(x^{p} y^{q}\) is the basis function, \(K\) and \(L\) are the dimensions of the image, and \(M_{pq}\) is the Cartesian moment of the two-dimensional image. Note that this basis function is highly correlated, that is, nonorthogonal. The moment \(M_{00}\) represents the image, whereas the first-order moments are used to find the center of mass, or centroid, of the image, given as
\[
\bar{x} = \frac{M_{10}}{M_{00}}, \qquad \bar{y} = \frac{M_{01}}{M_{00}}, \quad (15)
\]
where \((\bar{x}, \bar{y})\) is the centroid.
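Equations (14) and (15) can be sketched directly in NumPy; the 1-based pixel coordinates follow (14), and the function names are illustrative:

```python
import numpy as np

def raw_moment(img, p, q):
    """Cartesian moment M_pq of a grayscale image (eq. (14));
    rows are taken as x and columns as y, indices starting at 1."""
    K, L = img.shape
    x = np.arange(1, K + 1).reshape(-1, 1)  # x^p weight for each row
    y = np.arange(1, L + 1).reshape(1, -1)  # y^q weight for each column
    return float(np.sum((x ** p) * (y ** q) * img))

def centroid(img):
    """Centre of mass (x_bar, y_bar) from first-order moments (eq. (15))."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```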

6.2. Centralized Moments. Once the centroid is determined, it is used to compute the centralized moments. In [15] the central moments for two-dimensional data are given as
\[
\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^{p} (y - \bar{y})^{q}\, I(x, y), \quad (16)
\]
where \(\mu_{pq}\) are the central moments. Note that these are similar to Cartesian moments translated to the centroid. This depicts the translation-invariant property of the centralized moments, which are always akin to the centroid of the segmented object.

Further simplification of (16) up to order 3 generates the following moments:
\[
\begin{aligned}
\mu_{00} &= M_{00},\\
\mu_{01} &= 0,\\
\mu_{10} &= 0,\\
\mu_{11} &= M_{11} - \bar{x} M_{01} = M_{11} - \bar{y} M_{10},\\
\mu_{20} &= M_{20} - \bar{x} M_{10},\\
\mu_{02} &= M_{02} - \bar{y} M_{01},\\
\mu_{21} &= M_{21} - 2\bar{x} M_{11} - \bar{y} M_{20} + 2\bar{x}^{2} M_{01},\\
\mu_{12} &= M_{12} - 2\bar{y} M_{11} - \bar{x} M_{02} + 2\bar{y}^{2} M_{10},\\
\mu_{30} &= M_{30} - 3\bar{x} M_{20} + 2\bar{x}^{2} M_{10},\\
\mu_{03} &= M_{03} - 3\bar{y} M_{02} + 2\bar{y}^{2} M_{01}.
\end{aligned} \quad (17)
\]

6.3. Scale Invariant Moments. These moments are further made scale invariant, as explained in [4, 5], and are given as
\[
\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}}, \quad (18)
\]
where \(\eta_{pq}\) are the scale-normalized central moments and \(\gamma = (p + q)/2 + 1\), with \((p + q) \ge 2\).
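Equations (16) and (18) admit a short sketch (illustrative names; 1-based pixel coordinates as in (14)):

```python
import numpy as np

def central_moment(img, p, q):
    """Translation-invariant central moment mu_pq about the centroid (eq. (16))."""
    K, L = img.shape
    x = np.arange(1, K + 1, dtype=float).reshape(-1, 1)
    y = np.arange(1, L + 1, dtype=float).reshape(1, -1)
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return float((((x - xbar) ** p) * ((y - ybar) ** q) * img).sum())

def eta(img, p, q):
    """Scale-normalised central moment eta_pq (eq. (18)), for p + q >= 2."""
    gamma = (p + q) / 2 + 1
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** gamma
```

Translating the segmented object inside the canvas leaves every `central_moment` unchanged, which is exactly the invariance property claimed for (16).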

6.4. Image Orientation. The second-order central moments contain information about the orientation of the image. Using these moments, a covariance matrix is derived. Let
\[
\mu'_{20} = \frac{\mu_{20}}{\mu_{00}} = \frac{M_{20}}{M_{00}} - \bar{x}^{2}, \qquad
\mu'_{02} = \frac{\mu_{02}}{\mu_{00}} = \frac{M_{02}}{M_{00}} - \bar{y}^{2}, \qquad
\mu'_{11} = \frac{\mu_{11}}{\mu_{00}} = \frac{M_{11}}{M_{00}} - \bar{x}\bar{y}, \quad (19)
\]
and then the covariance matrix is given as
\[
\operatorname{cov}[I(x, y)] = \begin{pmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{pmatrix}. \quad (20)
\]

The major and minor axes of the image intensity correlate with the eigenvectors of the given covariance matrix. The orientation of the image is described by the eigenvector of the highest eigenvalue. In [4] it is shown that the angle \(\Theta\) is computed by the following equation:
\[
\Theta = \frac{1}{2} \tan^{-1}\left( \frac{2\mu'_{11}}{\mu'_{20} - \mu'_{02}} \right), \quad (21)
\]
provided that
\[
\mu'_{20} - \mu'_{02} \neq 0. \quad (22)
\]

Using (20), the eigenvalues of the covariance matrix are easily obtained and are given as
\[
\lambda_{i} = \frac{\mu'_{20} + \mu'_{02}}{2} \pm \frac{\sqrt{4\mu'^{2}_{11} + (\mu'_{20} - \mu'_{02})^{2}}}{2}. \quad (23)
\]
Notice that these values are proportional to the square of the length of the eigenvector axes. The difference between the eigenvalues marks yet another important characteristic: it shows how elongated the image is. This property is termed eccentricity and is computed as
\[
\sqrt{1 - \frac{\lambda_{2}}{\lambda_{1}}}. \quad (24)
\]

6.5. Rotation Invariant Moments. Previously we have discussed translation and scale invariant moments. In [16] rotation invariant moments are derived, usually termed the Hu set of invariant moments. These are given as follows:
\[
\begin{aligned}
I_{1} &= \eta_{20} + \eta_{02},\\
I_{2} &= (\eta_{20} - \eta_{02})^{2} + (2\eta_{11})^{2},\\
I_{3} &= (\eta_{30} - 3\eta_{12})^{2} + (3\eta_{21} - \eta_{03})^{2},\\
I_{4} &= (\eta_{30} + \eta_{12})^{2} + (\eta_{21} + \eta_{03})^{2},\\
I_{5} &= (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^{2} - 3(\eta_{21} + \eta_{03})^{2}\right]\\
&\quad + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}\right],\\
I_{6} &= (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),\\
I_{7} &= (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^{2} - 3(\eta_{21} + \eta_{03})^{2}\right]\\
&\quad - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^{2} - (\eta_{21} + \eta_{03})^{2}\right],\\
I_{8} &= \eta_{11}\left[(\eta_{30} + \eta_{12})^{2} - (\eta_{03} + \eta_{21})^{2}\right] - (\eta_{20} - \eta_{02})(\eta_{30} + \eta_{12})(\eta_{03} + \eta_{21}).
\end{aligned} \quad (25)
\]

Each of the rotation invariant moments extracts a characteristic attribute of the image. For example, \(I_{1}\) represents the moment of inertia about the centroid, while \(I_{7}\) extracts skew invariant properties, which are useful in differentiating between images that are mirror reflections of each other [10, 15–17].

7. Clustering for Classification

By now the iris image has been segmented and transformed into a rectangular canvas. All described moments are applied and a feature vector, namely \(I\), is extracted. This vector contains translation, scale, and rotation invariant moments as well as orientation-related moments. The vector corresponds to various features of the image; hence it is used for classification. An unsupervised approach is adopted for classification using the k-means clustering algorithm. Given a set of multidimensional observations \((x_{1}, x_{2}, \ldots, x_{n})\), the k-means algorithm partitions the observations into \(K\) sets, with \(K \le n\), generating the set \(S = \{S_{1}, S_{2}, \ldots, S_{K}\}\) so as to minimize the following objective function:
\[
\arg\min_{S} \sum_{i=1}^{K} \sum_{x_{j} \in S_{i}} \left| x_{j} - \mu_{i} \right|^{2}, \quad (26)
\]

where \(\mu_{i}\) is the mean of all the observations in \(S_{i}\). Initially, the extracted moment vectors for an iris image sample are considered to be the initial means.

The k-means algorithm has two major steps, namely the assignment step and the update step. The mean is used to assign each observation to a cluster in the assignment step. An observation is assigned to the cluster whose mean makes the closest match. Formally, this step generates the set \(S_{i}\) such that
\[
S_{i} = \left\{ x_{r} : \left| x_{r} - m_{i} \right| \le \left| x_{r} - m_{j} \right| \;\forall\, 1 \le j \le K \right\}. \quad (27)
\]
Also, an observation \(x_{r}\) should be associated with exactly one \(S_{i}\), even if two or more differences are found comparable. The next step is based on the identification of a cluster for each observation established in the previous step. The mean of the cluster is recalculated as the centroid of its observations, as given in the following equation:
\[
m_{i} = \frac{1}{\left| S_{i} \right|} \sum_{x_{j} \in S_{i}} x_{j}. \quad (28)
\]

Both steps are iterated and the centroids are readjusted. This process continues until there is no appreciable change in the means. At this stage the means have converged and no further training is required [18, 19].
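The two steps can be sketched as follows. The deterministic initialisation is an assumption for illustration; the paper seeds the means with extracted moment vectors:

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means: the assignment step (eq. (27)) followed by the
    update step (eq. (28)), iterated until the means stop moving."""
    X = np.asarray(X, dtype=float)
    # illustrative deterministic initialisation: k observations spread across the data
    means = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assignment step: each observation joins its nearest mean
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each mean becomes the centroid of its members
        new_means = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else means[j]
                              for j in range(k)])
        if np.allclose(new_means, means):
            break
        means = new_means
    return labels, means
```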

8. Results

The CASIA database, containing thousands of images belonging to hundreds of different people, is used to gather test results. Nearly one-fourth of the iris images from each class are retained as test cases while the rest are used for training. The distorted images within the database are rejected. The iris portion of each image is marked out using the segmentation algorithm and is then transformed onto a rectangular canvas. Further, the grayscale rectangular canvas of the iris is used

Figure 4: Each of (a) and (b) shows different clusters. Notice that all the clusters are linearly separable and can be distinguished by their Euclidean distance from the centroid.

to compute the image moment vector. This vector contains information which is translation, scale, and rotation invariant, and it provides orientation information. Using the k-means algorithm each image is assigned to a cluster. The k-means algorithm is iterated until convergence is achieved and the centroid of each cluster is determined. Once the system is fully trained, it is ready to accept an arbitrary input and provide a match. The model responds with the correlation of an arbitrary image moment vector to a cluster if the image belongs to a known class. Figure 4 depicts the various clusters formed; it also shows how the class of a sample is distinguished based upon the Euclidean distance of the feature vector of the sample from the centroid of an arbitrary cluster. Moreover, Figure 5 shows a confusion matrix depicting the accuracy of the model. The confusion matrix shows that the accuracy of the model for certain arbitrary classes is 99.0%, while the overall accuracy of the model for all the images in the database is estimated to be 98.5%. The model also reports the level of confidence of a match, based on the Euclidean distance of the sample from the centroid of the identified cluster. Level 0 is the highest, which means that the Euclidean distance of the sample from the centroid of the cluster is low; level 4 is the lowest, which indicates that the Euclidean distance of the sample from the centroid of any cluster does not lie within a stipulated threshold to confidently indicate a match.
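The matching rule described above (nearest centroid plus a banded confidence level) can be sketched as follows. The threshold values are hypothetical, since the paper does not publish the stipulated thresholds:

```python
import numpy as np

def match(feature, centroids, thresholds=(1.0, 2.0, 3.0, 4.0)):
    """Nearest-centroid match with a coarse confidence level.
    Level 0 = closest distance band; level 4 = the distance exceeds the
    largest threshold, so no confident match. Threshold values are
    illustrative only."""
    d = np.linalg.norm(np.asarray(centroids) - np.asarray(feature), axis=1)
    best = int(d.argmin())                       # nearest cluster centroid
    level = int(np.searchsorted(thresholds, d[best]))  # distance band 0..4
    return best, level
```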

Furthermore, a number of experiments were carried out to determine the accuracy and efficiency of the proposed model in comparison with other competitive models. In [20] the authors present a technique which extracts the features of the iris using fine-to-coarse approximation at different resolution levels, determined through a discrete dyadic wavelet transform zero-crossing representation. The resultant one-dimensional feature vector is used to find a match by computing various distances with arbitrary feature vectors. Ma et al. present yet another iris recognition technique using Gabor filters

[1, 6]. The authors use a bank of Gabor filters to extract a fixed-length feature vector signifying the global and local characteristics of the iris image. A match is established by computing the weighted Euclidean distance between feature vectors of arbitrary iris images. Daugman in [9] relies on the morphogenetic randomness of texture in the trabecular meshwork of the iris. A failure of the statistical independence test on two coded patterns from the same iris indicates a match. This method extracts the visible texture of the iris from a real-time video image. The image is then encoded into a compact sequence of multiscale quadrature 2D Gabor wavelet coefficients, of which the most significant 256 bytes form the iris code. An exclusive OR operation is performed to generate a decision. All the above-mentioned techniques, including the proposed technique, are executed in order to obtain results. The genuine acceptance rate (GAR) and false acceptance rate (FAR) are observed for each technique. A receiver operating characteristic (ROC) distribution is plotted for each technique based on the results, as shown in Figure 6. The ROC distribution comparatively highlights the accuracy along with the frequency of occurrence of errors of the proposed and other current state-of-the-art models. The following section briefly provides some discussion of the proposed system along with an interpretation of the ROC distribution formed.
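A sketch of how GAR and FAR are measured at one operating point; sweeping the threshold over the observed distances traces an ROC curve like the one in Figure 6 (names are illustrative):

```python
import numpy as np

def gar_far(genuine_d, impostor_d, threshold):
    """Genuine acceptance rate and false acceptance rate at one distance
    threshold: a probe is accepted when its distance to the claimed
    class centroid falls below the threshold."""
    gar = float(np.mean(np.asarray(genuine_d) < threshold))
    far = float(np.mean(np.asarray(impostor_d) < threshold))
    return gar, far
```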

9. Conclusion

Through analysis of the data obtained after moment extraction, a number of conclusions are inferred. Images of a certain iris differing in orientation yielded varying eigenvalues and eccentricity. However, a change in orientation of an image barely affects the values of the rotation invariant moments, while raw and scale invariant moments are affected. A change in orientation of an image affects the Euclidean distance of the moment vectors from the centroid. Despite this, there still remains a great probability of the image being classified

Figure 5: The figure shows the confusion matrix for some arbitrary classes; the accuracy of the model for these classes is 99.0%.

Curves: Avila; Li Ma; Proposed; Daugman.

Figure 6: The figure illustrates the receiver operating characteristic (ROC) distributions for different competitive techniques, including the proposed one.

correctly because of coherence in the scale invariant moments. Although the model exhibits scale and rotation invariant attributes, some impairment is introduced by the luminosity of the image. Two arbitrary images of the same object yield comparable moments if the luminosity is the same, but they may yield differing moments if the luminosity is altered. In the underlying research work it is assumed that the luminosity level is the same for all the images, as each image is obtained by an iriscope working in similar conditions. The model provides resilience towards variation of scale and rotation as compared to other techniques, which require

coherence of phase and size. The model can be further improved by the incorporation of a technique that processes each image to provide uniform luminosity. Furthermore, the ROC distribution obtained from all the test cases (shown in Figure 6) shows that the performance of the proposed model is comparable with the Daugman method, while it yields better performance than the methods described in [1, 6, 20].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[2] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586–595, 2007.

[3] K. Roy, P. Bhattacharya, and R. C. Debnath, "Multi-class SVM based iris recognition," in Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT '07), pp. 1–6, Dhaka, Bangladesh, December 2007.

[4] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41–50, 2008.

[5] Y. D. Khan, F. Ahmad, and M. W. Anwar, "A neuro-cognitive approach for iris recognition using back propagation," World Applied Sciences Journal, vol. 16, no. 5, pp. 678–685, 2012.

[6] L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of the 5th Asian Conference on Computer Vision, pp. 279–283, Melbourne, Australia, 2002.

[7] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998.

[8] K. Lee, S. Lim, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61–70, 2001.

[9] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.

[10] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.

[11] E. Arias-Castro and D. L. Donoho, "Does median filtering truly preserve edges better than linear filtering?" Annals of Statistics, vol. 37, no. 3, pp. 1172–1206, 2009.

[12] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.

[13] N. Kanopoulos, N. Vasanthavada, and R. L. Baker, "Design of an image edge detection filter using the Sobel operator," IEEE Journal of Solid-State Circuits, vol. 23, no. 2, pp. 358–367, 1988.

[14] J. D. Foley, Computer Graphics: Principles and Practice, Addison-Wesley, Reading, Mass, USA, 1996.

[15] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, no. 12, pp. 1117–1138, 1991.

[16] J. Flusser, T. Suk, and B. Zitova, Moments and Moment Invariants in Pattern Recognition, John Wiley & Sons, Chichester, UK, 2009.

[17] W. T. Freeman, D. B. Anderson, P. A. Beardsley, et al., "Computer vision for interactive computer graphics," IEEE Computer Graphics and Applications, vol. 18, no. 3, pp. 42–52, 1998.

[18] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, "NP-hardness of Euclidean sum-of-squares clustering," Machine Learning, vol. 75, no. 2, pp. 245–248, 2009.

[19] J. A. Hartigan and M. A. Wong, "Algorithm AS 136: a k-means clustering algorithm," Journal of the Royal Statistical Society C: Applied Statistics, vol. 28, no. 1, pp. 100–108, 1979.

[20] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognition, vol. 38, no. 2, pp. 231–240, 2005.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 5: Research Article Iris Recognition Using Image Moments and ...downloads.hindawi.com/journals/tswj/2014/723595.pdf · Image moments and method of computation of moments are described


The basis function is designed such that it represents some invariant features of the image. Furthermore, the properties of the basis function are passed on to the moments. An image is of discrete nature; thus it is divided into pixels, each having a discrete intensity level. Equation (11) is adopted for the digital image as

$$M_{pq} = \sum_{x}\sum_{y} \psi_{pq}(x, y)\, I(x, y), \qquad p, q = 0, 1, 2, \ldots, \infty \quad (12)$$

where $I(x, y)$ is the intensity of a pixel in the digital image at the $x$th row and $y$th column.

In [10, 16] the authors prove that the two-dimensional continuous $(p+q)$th order moments are defined using the integral

$$M_{pq} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^{p} y^{q}\, I(x, y)\, dx\, dy \quad (13)$$

where $I(x, y)$ lies within some finite region of the $xy$-plane. In the case of a digital image the integrals are replaced by summations, which is formulated as

$$M_{pq} = \sum_{x=1}^{K} \sum_{y=1}^{L} x^{p} y^{q}\, I(x, y) \quad (14)$$

where $x^{p}y^{q}$ is the basis function, $K$ and $L$ are the dimensions of the image, and $M_{pq}$ is the Cartesian moment of the two-dimensional image. Note that this basis function is highly correlated, that is, nonorthogonal. The moment $M_{00}$ represents the total intensity (mass) of the image, whereas the first-order moments are used to find the center of mass, or centroid, of the image, given as

$$\bar{x} = \frac{M_{10}}{M_{00}}, \qquad \bar{y} = \frac{M_{01}}{M_{00}} \quad (15)$$

where $(\bar{x}, \bar{y})$ is the centroid.
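As a concrete illustration, the discrete moment sum of (14) and the centroid of (15) can be sketched in a few lines of NumPy. This is our own sketch, not the authors' implementation; the function names are illustrative.

```python
import numpy as np

def raw_moment(img, p, q):
    """Cartesian moment M_pq = sum_x sum_y x^p y^q I(x, y) of a grayscale
    image, with rows indexed by x = 1..K and columns by y = 1..L as in (14)."""
    K, L = img.shape
    x = np.arange(1, K + 1).reshape(-1, 1)  # column vector of row indices
    y = np.arange(1, L + 1).reshape(1, -1)  # row vector of column indices
    return np.sum((x ** p) * (y ** q) * img)

def centroid(img):
    """Centre of mass (x_bar, y_bar) of the image, as in (15)."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

For a uniform image the centroid falls at the geometric center, which gives a quick sanity check of the index convention.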

6.2. Centralized Moments. Once the centroid is determined, it is used to compute the centralized moments. In [15] the central moments for two-dimensional data are given as

$$\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^{p} (y - \bar{y})^{q}\, I(x, y) \quad (16)$$

where $\mu_{pq}$ are the central moments. Note that these are similar to Cartesian moments translated to the centroid. This depicts the translation-invariant property of the centralized moments, which are always akin to the centroid of the segmented object.

Further simplification of (16) up to order 3 generates the following moments:
$$\begin{aligned}
\mu_{00} &= M_{00}\\
\mu_{01} &= 0\\
\mu_{10} &= 0\\
\mu_{11} &= M_{11} - \bar{x}M_{01} = M_{11} - \bar{y}M_{10}\\
\mu_{20} &= M_{20} - \bar{x}M_{10}\\
\mu_{02} &= M_{02} - \bar{y}M_{01}\\
\mu_{21} &= M_{21} - 2\bar{x}M_{11} - \bar{y}M_{20} + 2\bar{x}^{2}M_{01}\\
\mu_{12} &= M_{12} - 2\bar{y}M_{11} - \bar{x}M_{02} + 2\bar{y}^{2}M_{10}\\
\mu_{30} &= M_{30} - 3\bar{x}M_{20} + 2\bar{x}^{2}M_{10}\\
\mu_{03} &= M_{03} - 3\bar{y}M_{02} + 2\bar{y}^{2}M_{01}
\end{aligned} \quad (17)$$
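A minimal sketch of (16), together with a numerical check that the centralized moments really are translation invariant: embedding the same image at a different offset inside a larger canvas leaves every $\mu_{pq}$ unchanged. The helper name is our own.

```python
import numpy as np

def central_moment(img, p, q):
    """Centralized moment mu_pq of (16): the moment taken about the
    centroid of the image, which makes it translation invariant."""
    K, L = img.shape
    x = np.arange(K).reshape(-1, 1).astype(float)
    y = np.arange(L).reshape(1, -1).astype(float)
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()
```

By construction $\mu_{10} = \mu_{01} = 0$, which the test below also verifies.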

6.3. Scale Invariant Moments. These moments are further made scale invariant, as explained in [4, 5], and are given as

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\gamma}} \quad (18)$$

where $\eta_{pq}$ are the scale-normalized central moments and $\gamma = (p+q)/2 + 1$ for $(p+q) \ge 2$.

6.4. Image Orientation. The second-order central moments contain information about the orientation of the image. Using these moments, a covariance matrix is derived. Let

$$\mu'_{20} = \frac{\mu_{20}}{\mu_{00}} = \frac{M_{20}}{M_{00}} - \bar{x}^{2}, \qquad
\mu'_{02} = \frac{\mu_{02}}{\mu_{00}} = \frac{M_{02}}{M_{00}} - \bar{y}^{2}, \qquad
\mu'_{11} = \frac{\mu_{11}}{\mu_{00}} = \frac{M_{11}}{M_{00}} - \bar{x}\bar{y} \quad (19)$$

and then the covariance matrix is given as

$$\operatorname{cov}[I(x, y)] = \begin{pmatrix} \mu'_{20} & \mu'_{11} \\ \mu'_{11} & \mu'_{02} \end{pmatrix} \quad (20)$$

The major and minor axes of the image intensity correlate with the eigenvectors of the given covariance matrix. The orientation of the image is described by the eigenvector of the highest eigenvalue. In [4] it is shown that the angle $\Theta$ is computed by the following equation:

$$\Theta = \frac{1}{2}\tan^{-1}\left(\frac{2\mu'_{11}}{\mu'_{20} - \mu'_{02}}\right) \quad (21)$$

provided that

$$\mu'_{20} - \mu'_{02} \neq 0 \quad (22)$$


Using (20), the eigenvalues of the covariance matrix are easily obtained and are given as

$$\lambda_{i} = \frac{\mu'_{20} + \mu'_{02}}{2} \pm \frac{\sqrt{4\mu'^{2}_{11} + \left(\mu'_{20} - \mu'_{02}\right)^{2}}}{2} \quad (23)$$

Notice that these values are proportional to the square of the length of the eigenvector axes. The difference between the eigenvalues marks yet another important characteristic: it shows how elongated the image is. This property is termed eccentricity and is computed as

$$\sqrt{1 - \frac{\lambda_{2}}{\lambda_{1}}} \quad (24)$$
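The orientation angle of (21) and the eccentricity of (24) can be computed directly from the normalized second-order moments of (19). The sketch below is illustrative, assuming a grayscale image stored as a NumPy array; `np.arctan2` is used so the quotient in (21) is well defined even when $\mu'_{20} = \mu'_{02}$.

```python
import numpy as np

def orientation_and_eccentricity(img):
    """Orientation angle Theta of (21) and eccentricity of (24), from the
    normalized second-order central moments of (19)."""
    K, L = img.shape
    x = np.arange(K).reshape(-1, 1).astype(float)
    y = np.arange(L).reshape(1, -1).astype(float)
    m00 = img.sum()
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = (((x - xbar) ** 2) * img).sum() / m00
    mu02 = (((y - ybar) ** 2) * img).sum() / m00
    mu11 = ((x - xbar) * (y - ybar) * img).sum() / m00
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # Eigenvalues of the covariance matrix, as in (23)
    root = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02) / 2 + root / 2
    lam2 = (mu20 + mu02) / 2 - root / 2
    return theta, np.sqrt(1 - lam2 / lam1)
```

A thin horizontal bar comes out strongly eccentric, while a uniform square has eccentricity zero, matching the interpretation of (24) as a measure of elongation.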

6.5. Rotation Invariant Moments. Previously we have discussed translation and scale invariant moments. In [16] rotation invariant moments are derived, which are usually termed the Hu set of invariant moments. These are given as follows:

$$\begin{aligned}
I_{1} &= \eta_{20} + \eta_{02}\\
I_{2} &= \left(\eta_{20} - \eta_{02}\right)^{2} + \left(2\eta_{11}\right)^{2}\\
I_{3} &= \left(\eta_{30} - 3\eta_{12}\right)^{2} + \left(3\eta_{21} - \eta_{03}\right)^{2}\\
I_{4} &= \left(\eta_{30} + \eta_{12}\right)^{2} + \left(\eta_{21} + \eta_{03}\right)^{2}\\
I_{5} &= \left(\eta_{30} - 3\eta_{12}\right)\left(\eta_{30} + \eta_{12}\right)\left[\left(\eta_{30} + \eta_{12}\right)^{2} - 3\left(\eta_{21} + \eta_{03}\right)^{2}\right]\\
&\quad + \left(3\eta_{21} - \eta_{03}\right)\left(\eta_{21} + \eta_{03}\right)\left[3\left(\eta_{30} + \eta_{12}\right)^{2} - \left(\eta_{21} + \eta_{03}\right)^{2}\right]\\
I_{6} &= \left(\eta_{20} - \eta_{02}\right)\left[\left(\eta_{30} + \eta_{12}\right)^{2} - \left(\eta_{21} + \eta_{03}\right)^{2}\right] + 4\eta_{11}\left(\eta_{30} + \eta_{12}\right)\left(\eta_{21} + \eta_{03}\right)\\
I_{7} &= \left(3\eta_{21} - \eta_{03}\right)\left(\eta_{30} + \eta_{12}\right)\left[\left(\eta_{30} + \eta_{12}\right)^{2} - 3\left(\eta_{21} + \eta_{03}\right)^{2}\right]\\
&\quad - \left(\eta_{30} - 3\eta_{12}\right)\left(\eta_{21} + \eta_{03}\right)\left[3\left(\eta_{30} + \eta_{12}\right)^{2} - \left(\eta_{21} + \eta_{03}\right)^{2}\right]\\
I_{8} &= \eta_{11}\left[\left(\eta_{30} + \eta_{12}\right)^{2} - \left(\eta_{03} + \eta_{21}\right)^{2}\right] - \left(\eta_{20} - \eta_{02}\right)\left(\eta_{30} + \eta_{12}\right)\left(\eta_{03} + \eta_{21}\right)
\end{aligned} \quad (25)$$

Every one of the rotation invariant moments extracts a characteristic attribute of the image. For example, $I_{1}$ represents the moment of inertia about the centroid, while $I_{7}$ extracts skew invariant properties, which are useful in differentiating between images which are mirror reflections of each other [10, 15-17].
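A sketch of the first two Hu invariants of (25), built on the scale-normalized moments of (18). Rotating the image by 90° swaps $\eta_{20}$ and $\eta_{02}$ and flips the sign of $\eta_{11}$, so $I_{1}$ and $I_{2}$ should come out (numerically) unchanged; the code below checks exactly that. The helper names are our own, not the paper's.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu invariants I1, I2 of (25), computed from the
    scale-normalized central moments eta_pq of (18)."""
    K, L = img.shape
    x = np.arange(K).reshape(-1, 1).astype(float)
    y = np.arange(L).reshape(1, -1).astype(float)
    m00 = img.sum()
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # mu_pq normalized by mu_00^gamma with gamma = (p+q)/2 + 1
        mu = (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()
        return mu / m00 ** ((p + q) / 2 + 1)

    i1 = eta(2, 0) + eta(0, 2)
    i2 = (eta(2, 0) - eta(0, 2)) ** 2 + (2 * eta(1, 1)) ** 2
    return i1, i2
```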

7. Clustering for Classification

By now the iris image has been segmented and transformed into a rectangular canvas. All described moments are applied and a feature vector, namely $I$, is extracted. This vector contains translation, scale, and rotation invariant as well as orientation-related moments. This vector corresponds to various features of the image; hence it is used for classification. An unsupervised approach is adopted for classification using the k-means clustering algorithm. Given a set of multidimensional observations $(x_{1}, x_{2}, \ldots, x_{n})$, the k-means algorithm partitions the observations into $K$ sets, $K \le n$, generating the set $S = \{S_{1}, S_{2}, \ldots, S_{k}\}$, so as to minimize the following objective function:

$$\underset{S}{\arg\min} \sum_{i=1}^{K} \sum_{x_{j} \in S_{i}} \left\| x_{j} - \mu_{i} \right\|^{2} \quad (26)$$

where $\mu_{i}$ is the mean of all the observations in $S_{i}$. Initially, the extracted moment vector of one iris image sample per cluster is taken as the initial mean.

The k-means algorithm has two major steps, namely, the assignment step and the update step. In the assignment step, the mean is used to assign each observation to a cluster: an observation is assigned to the cluster whose mean is the closest match. Formally, this step generates the set $S_{i}$ such that

$$S_{i} = \left\{ x_{r} : \left\| x_{r} - m_{i} \right\| \le \left\| x_{r} - m_{j} \right\| \;\; \forall\, 1 \le j \le K \right\} \quad (27)$$

Also, an observation $x_{r}$ should be associated with exactly one $S_{i}$, even if two or more distances are found comparable. The next step is based on the identification of a cluster for each observation established in the previous step. The mean for the cluster is recalculated as the centroid of its observations, as given in the following equation:

$$m_{i} = \frac{1}{\left| S_{i} \right|} \sum_{x_{j} \in S_{i}} x_{j} \quad (28)$$

Both steps are iterated and the centroids are readjusted. This process continues until there is no appreciable change in the means. At this stage the means have converged and no further training is required [18, 19].
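The assignment step (27) and update step (28) can be sketched as a plain k-means loop over feature vectors. This is a generic sketch of the algorithm, not the authors' implementation; as in the text, iteration stops once the means no longer move appreciably.

```python
import numpy as np

def k_means(obs, init_means, iters=100):
    """Plain k-means following (26)-(28): assign each observation to its
    nearest mean, then recompute each mean as the centroid of its cluster."""
    means = np.array(init_means, dtype=float)
    for _ in range(iters):
        # Assignment step (27): each observation joins exactly one cluster,
        # that of the nearest mean in Euclidean distance (ties -> first).
        d = np.linalg.norm(obs[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step (28): each mean becomes the centroid of its cluster;
        # an empty cluster keeps its previous mean.
        new_means = np.array([obs[labels == i].mean(axis=0)
                              if np.any(labels == i) else means[i]
                              for i in range(len(means))])
        if np.allclose(new_means, means):  # converged: means stopped moving
            break
        means = new_means
    return means, labels
```

On two well-separated blobs of points the loop converges in a handful of iterations and recovers the two groups exactly.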

8. Results

The CASIA database, containing thousands of images belonging to hundreds of different people, is used to gather test results. Nearly one-fourth of the iris images from each class are retained as test cases, while the rest are used for training. The distorted images within the database are rejected. The iris portion of the image is marked out using the segmentation algorithm and is later transformed into a rectangular canvas. Further, the grayscale rectangular canvas of the iris is used


Figure 4: Each of (a) and (b) shows different clusters (axes in units of 10^4). Notice that all the clusters are linearly separable and can be distinguished by their Euclidean distance from the centroid.

to compute the image moment vector. This vector contains information which is translation, scale, and rotation invariant, and it provides orientation information. Using the k-means algorithm, each image is assigned to a cluster. The k-means algorithm is iterated until convergence is achieved and the centroid of each cluster is determined. Once the system is fully trained, it is ready to accept an arbitrary input and provide a match. The model responds with the correlation of an arbitrary image moment vector to a cluster if the image belongs to a known class. Figure 4 depicts the various clusters formed; it also shows how the class of a sample is distinguished based upon the Euclidean distance of the feature vector of the sample from the centroid of an arbitrary cluster. Moreover, Figure 5 shows a confusion matrix depicting the accuracy of the model. The confusion matrix shows that the accuracy of the model for certain arbitrary classes is 99.0%, while the overall accuracy of the model for all the images in the database is estimated to be 98.5%. The model also reports the level of confidence of a match based on the Euclidean distance of the sample from the centroid of the identified cluster. Level 0 is the highest, meaning that the Euclidean distance of the sample from the centroid of the cluster is low; level 4 is the lowest, indicating that the Euclidean distance of the sample from the centroid of any cluster does not lie within a stipulated threshold to confidently indicate a match.
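The matching rule described above (nearest centroid, with banded confidence levels) might be sketched as follows. The threshold values here are purely illustrative assumptions, not taken from the paper; level 4 means no threshold was met and no match is declared.

```python
import numpy as np

def match(feature, centroids, thresholds=(0.5, 1.0, 2.0, 4.0)):
    """Nearest-centroid match with a banded confidence level.
    thresholds: illustrative distance bands for confidence levels 0-3;
    a distance beyond the last band yields level 4 (no match)."""
    d = np.linalg.norm(centroids - feature, axis=1)
    best = int(d.argmin())
    # Level 0 is the most confident (smallest distance band).
    level = int(np.searchsorted(thresholds, d[best]))
    return (best, level) if level < len(thresholds) else (None, level)
```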

Furthermore, a number of experiments were carried out to determine the accuracy and efficiency of the proposed model in comparison with other competitive models. In [20] the authors present a technique which extracts the features of the iris using fine-to-coarse approximation at different resolution levels, determined through a discrete dyadic wavelet transform zero-crossing representation. The resultant one-dimensional feature vector is used to find a match by computing various distances with arbitrary feature vectors. Ma et al. present yet another iris recognition technique using Gabor filters [1, 6]. The authors use a bank of Gabor filters to extract a fixed-length feature vector signifying the global and local characteristics of the iris image. A match is established by computing the weighted Euclidean distance between feature vectors of arbitrary iris images. Daugman in [9] relies on the morphogenetic randomness of texture in the trabecular meshwork of the iris. A failure of the statistical independence test on two coded patterns from the same iris indicates a match. This method extracts the visible texture of the iris from a real-time video image. The image is later encoded into a compact sequence of multiscale quadrature 2D Gabor wavelet coefficients. The most significant 256 bytes form the iris code. An exclusive OR operation is performed to generate a decision. All the above-mentioned techniques, including the proposed technique, are executed in order to obtain results. The genuine acceptance rate (GAR) and false acceptance rate (FAR) are observed for each technique. A receiver operating characteristic (ROC) distribution is plotted for each technique based on the results, as shown in Figure 6. The ROC distribution comparatively highlights the accuracy, along with the frequency of occurrence of errors, of the proposed and other current state-of-the-art models. The following section briefly provides some discussion of the proposed system along with an interpretation of the ROC distribution.

9. Conclusion

Through analysis of the data obtained after moment extraction, a number of conclusions are inferred. Images of a certain iris differing in orientation yielded varying eigenvalues and eccentricity. However, a change in orientation of an image barely affects the values of rotation invariant moments, while raw and scale invariant moments are affected. A change in orientation of an image affects the Euclidean distance of the moment vectors from the centroid. Despite this, there still remains a great probability of the image being classified


Figure 5: The confusion matrix for some arbitrary classes; the accuracy of the model for these classes is 99.0%.

Figure 6: Receiver operating characteristic distributions for different competitive techniques (Avila, Li Ma, Daugman) and the proposed one.

correctly because of coherence in the scale invariant moments. Although the model exhibits scale and rotation invariant attributes, some impairment is introduced by the luminosity of the image. Two arbitrary images of the same object yield comparable moments if the luminosity is the same, but they may yield differing moments if the luminosity is altered. In the underlying research work it is assumed that the luminosity level is the same for all the images, as each image is obtained by an iriscope working in similar conditions. The model provides resilience towards variation of scale and rotation as compared to other techniques, which require coherence of phase and size. The model can be further improved by incorporation of a technique that processes each image to provide uniform luminosity. Furthermore, the ROC distribution obtained from all the test cases (shown in Figure 6) shows that the performance of the proposed model is comparable with Daugman's method, while it yields a better performance than the methods described in [1, 6, 20].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, 2004.

[2] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586-595, 2007.

[3] K. Roy, P. Bhattacharya, and R. C. Debnath, "Multi-class SVM based iris recognition," in Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT '07), pp. 1-6, Dhaka, Bangladesh, December 2007.

[4] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41-50, 2008.

[5] Y. D. Khan, F. Ahmad, and M. W. Anwar, "A neuro-cognitive approach for iris recognition using back propagation," World Applied Sciences Journal, vol. 16, no. 5, pp. 678-685, 2012.

[6] L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of the 5th Asian Conference on Computer Vision, pp. 279-283, Melbourne, Australia, 2002.

[7] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.

[8] K. Lee, S. Lim, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61-70, 2001.

[9] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004.

[10] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179-187, 1962.

[11] E. Arias-Castro and D. L. Donoho, "Does median filtering truly preserve edges better than linear filtering?" Annals of Statistics, vol. 37, no. 3, pp. 1172-1206, 2009.

[12] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.

[13] N. Kanopoulos, N. Vasanthavada, and R. L. Baker, "Design of an image edge detection filter using the Sobel operator," IEEE Journal of Solid-State Circuits, vol. 23, no. 2, pp. 358-367, 1988.

[14] J. D. Foley, Computer Graphics: Principles and Practice, Addison-Wesley, Reading, Mass, USA, 1996.

[15] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, no. 12, pp. 1117-1138, 1991.

[16] J. Flusser, T. Suk, and B. Zitova, Moments and Moment Invariants in Pattern Recognition, John Wiley & Sons, Chichester, UK, 2009.

[17] W. T. Freeman, D. B. Anderson, P. A. Beardsley, et al., "Computer vision for interactive computer graphics," IEEE Computer Graphics and Applications, vol. 18, no. 3, pp. 42-52, 1998.

[18] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, "NP-hardness of Euclidean sum-of-squares clustering," Machine Learning, vol. 75, no. 2, pp. 245-248, 2009.

[19] J. A. Hartigan and M. A. Wong, "Algorithm AS 136: a k-means clustering algorithm," Journal of the Royal Statistical Society C: Applied Statistics, vol. 28, no. 1, pp. 100-108, 1979.

[20] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognition, vol. 38, no. 2, pp. 231-240, 2005.


Page 6: Research Article Iris Recognition Using Image Moments and ...downloads.hindawi.com/journals/tswj/2014/723595.pdf · Image moments and method of computation of moments are described

6 The Scientific World Journal

Using (20) the eigenvalues of the covariance matrix are easilyobtained and are given as

120582119894=120583101584020+ 120583101584002

2plusmnradic41205831015840211+ (120583101584020minus 120583101584002)2

2

(23)

Notice that these values are proportional to the square ofthe length of the eigenvector axes The difference betweenthe eigenvalues marks yet another important characteristicIt shows how elongated the image is This property is termedeccentricity and is computed as

radic1 minus1205822

1205821

(24)

65 Rotation Invariant Moments Previously we have dis-cussed translation and scale invariant moments In [16]rotation invariant moments are derived which are usuallytermed as a Hu set of invariant moments These are given asfollows

1198681= 12057820+ 12057802

1198682= (12057820minus 12057802)2

+ (212057811)2

1198683= (12057830minus 312057812)2

+ (312057821minus 12057803)2

1198684= (12057830+ 12057812)2

+ (12057821+ 12057803)2

1198685= (12057830minus 312057812) (12057830+ 12057812)

times [(12057830+ 12057812)2

minus 3(12057821+ 12057803)2

]

+ (312057821minus 12057803) (12057821+ 12057803)

times [3(12057830+ 12057812)2

minus (12057821+ 12057803)2

]

1198686= (12057820minus 12057802)

times [(12057830+ 12057812)2

minus (12057821+ 12057803)2

]

+ 412057811(12057830+ 12057812) (12057821+ 12057803)

1198687= (312057821minus 12057803) (12057830+ 12057812)

times [(12057830+ 12057812)2

minus 3(12057821+ 12057803)2

]

minus (12057830minus 312057812) (12057821+ 12057803)

times [3(12057830+ 12057812)2

minus (12057821+ 12057803)2

]

1198688= 12057811[(12057830+ 12057812)2

minus (12057803+ 12057821)2

]

minus (12057820minus 12057802) (12057830+ 12057812) (12057803+ 12057821)

(25)

Every one of the rotation invariant moments extracts a char-acteristic attribute of the image For example 119868

1represents

the moment of inertia along the centroid while 1198687extracts

skew invariant properties which are useful in differentiatingbetween images which are mirror reflections of each other[10 15ndash17]

7 Clustering for Classification

By now the iris image has been segmented and transformedinto a rectangular canvas All described moments are appliedand a feature vector is extracted namely 119868 This vectorcontains translation scale and rotation invariant and orien-tation related moments This vector corresponds to variousfeatures of the image hence it is used for classification Anunsupervised approach is adopted for classification using k-means clustering algorithm Using a set of multidimensionalobservations (119909

1 1199092 119909

119899) the k-means algorithm parti-

tions the observations into119870 sets such that119870 le 119899 generatingthe set 119878 = 119878

1 1198782 119878

119896so as to minimize the following

objective function

arg119904

min119870

sum119894=1

sum119909119895isin119878119894

10038161003816100381610038161003816119909119895minus 120583119894

10038161003816100381610038161003816

2

(26)

where 120583119894is mean of all the observations in 119878

119894 Initially

extracted moment vectors for an iris image sample areconsidered to be the initial mean

The k-means Algorithm has two major steps namely theassignment step and the update step The mean is used toassign each observation to a cluster in the assignment stepAn observation is assigned a cluster whose mean makes theclosestmatch Formally this step generated the set 119878

119894such that

119878119894= 1199091199031003816100381610038161003816119909119903 minus 119898119894

1003816100381610038161003816 le10038161003816100381610038161003816119909119903minus 119898119895

10038161003816100381610038161003816forall1 le 119895 le 119870 (27)

Also an observation 119909119903should be associated with exactly

one 119878119894even if two or more differences are found comparable

The next step is based on the identification of a cluster for theobservation established in the previous stepThemean for thecluster is recalculated as the centroid of the observations asgiven in following equation

119898119894=

110038161003816100381610038161198781198941003816100381610038161003816sum119909119895isin119878119894

119909119895 (28)

Both steps are iterated and the centroid is readjusted Thisprocess continues until there is no appreciable change in themeans At this stage themeans have converged and no furthertraining is required [18 19]

8 Results

TheCASIA database containing thousands of images belong-ing to hundreds of different people is used to gather testresults Nearly one-fourth of the iris images from each classare retained as test case while the rest are used for trainingThe distorted images within the database are rejected Irisportion of the image is marked out using the segmentationalgorithm and is later transformed into a rectangular canvasFurther the grey scale rectangular canvas of iris is used

The Scientific World Journal 7

15

10

5

00 1 2 3 4 5 6 7 8 9

times104

times104

(a)

0 1 2 3 4 5 6 7 8 9

times104

times104

10

14

12

10

8

6

4

2

0

(b)

Figure 4 Each of (a) and (b) shows different clusters Notice that all the clusters are linearly separable and can be distinguished by theirEuclidean distance from the Centroid

to compute image moment vector This vector containsinformationwhich is translation scale and rotation invariantand provides orientation information Using the k-meansalgorithm each image is assigned to a cluster The k-meansalgorithm is iterated until convergence is achieved and thecentroid of each cluster is determined Once the system isfully trained it is ready to accept an arbitrary input andprovide a match The model responds with the correlation ofan arbitrary image moments vector to a cluster if the imagebelongs to a known class In Figure 4 various clusters formedare depicted it also shows how the class of a sample is dis-tinguished based upon the Euclidean distance of the featurevector of the sample from the centroid of an arbitrary clusterMoreover Figure 5 shows a confusion matrix depicting theaccuracy of the model The confusion matrix shows that theaccuracy of the model for certain arbitrary classes is 990while the overall accuracy of the model for all the images inthe database is estimated to be 985Moreover it also reportsthe level of confidence of match based on Euclidean distanceof the sample from the centroid of the identified cluster Level0 is the highest which means that the Euclidean distance ofthe sample from the centroid of the cluster is low and level4 is the lowest which indicates that the Euclidean distance ofthe sample from the centroid of any cluster does not lie withina stipulated threshold to confidently indicate a match Figure4 shows the clusters formed using the k-means algorithm

Furthermore, a number of experiments were carried out to determine the accuracy and efficiency of the proposed model in comparison with other competitive models. In [20] the authors present a technique which extracts the features of the iris using fine-to-coarse approximation at different resolution levels, determined through a discrete dyadic wavelet transform zero-crossing representation. The resultant one-dimensional feature vector is used to find a match by computing various distances with arbitrary feature vectors. Ma et al. present yet another iris recognition technique using Gabor filters

[1, 6]. The authors use a bank of Gabor filters to extract a fixed-length feature vector signifying the global and local characteristics of the iris image. A match is established by computing the weighted Euclidean distance between feature vectors of arbitrary iris images. Daugman in [9] relies on the morphogenetic randomness of texture in the trabecular meshwork of the iris. A failure of the statistical independence test on two coded patterns from the same iris indicates a match. This method extracts the visible texture of the iris from a real-time video image. The image is later encoded into a compact sequence of multiscale quadrature 2D Gabor wavelet coefficients. The most significant 256 bytes form the iris code. An exclusive OR operation is performed to generate a decision. All the above-mentioned techniques, including the proposed technique, are executed in order to obtain results. The genuine acceptance rate (GAR) and false acceptance rate (FAR) are observed for each technique. A receiver operating characteristics (ROC) distribution is plotted for each technique based on the results, as shown in Figure 6. The ROC distribution comparatively highlights the accuracy, along with the frequency of occurrence of errors, of the proposed and other current state-of-the-art models. The following section briefly provides some discussion about the proposed system along with an interpretation of the ROC distribution formed.
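The exclusive-OR decision in Daugman's method reduces to a normalized Hamming distance between two binary iris codes. A hedged sketch: the 2048-bit length matches the 256-byte code mentioned above, but the 5% flip rate and the 0.32 decision threshold are illustrative assumptions, not measured values from this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
code_a = rng.integers(0, 2, 2048, dtype=np.uint8)    # 256-byte iris code, one bit per entry
flips = (rng.random(2048) < 0.05).astype(np.uint8)   # assumed ~5% acquisition noise
code_b = code_a ^ flips                              # same iris, new capture
code_c = rng.integers(0, 2, 2048, dtype=np.uint8)    # statistically independent iris

def hamming(a, b):
    """Fraction of disagreeing bits, computed with exclusive OR."""
    return np.count_nonzero(a ^ b) / a.size

# Codes from the same iris disagree on few bits; independent codes
# disagree on roughly half of them.
same_eye = hamming(code_a, code_b)
different_eye = hamming(code_a, code_c)
is_match = same_eye < 0.32   # illustrative decision threshold
```

A distance well below the threshold fails the statistical-independence test, which is precisely what signals a match in this scheme.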

9. Conclusion

Through analysis of the data obtained after moment extraction, a number of conclusions are inferred. Images of a certain iris differing in orientation yielded varying eigenvalues and eccentricity. However, a change in the orientation of an image barely affects the values of rotation invariant moments, while raw and scale invariant moments are affected. A change in the orientation of an image affects the Euclidean distance of the moment vectors from the centroid. Despite this, there still remains a great probability of the image being classified

[Figure 5 appears here: a 12 × 12 confusion matrix of output class versus target class, with per-class accuracies roughly in the 94–100% range.]

Figure 5: The figure shows the confusion matrix for some arbitrary classes; the accuracy of the model for these classes is 99.0%.

[Figure 6 appears here: ROC curves for Avila, Li Ma, Daugman, and the proposed technique, with genuine acceptance rate (roughly 0.965–1.005) plotted against false acceptance rate on a logarithmic scale from 0.0001 to 1.]

Figure 6: The figure illustrates the receiver operating characteristics distributions for different competitive techniques including the proposed one.

correctly because of coherence in the scale invariant moments. Although the model exhibits scale and rotation invariant attributes, some impairment is caused by the luminosity of the image. Two arbitrary images of the same object yield comparable moments if the luminosity is the same, but they may yield differing moments if the luminosity is altered. In the underlying research work it is assumed that the luminosity level is the same for all the images, as each image is obtained by an iriscope working in similar conditions. The model provides resilience towards variation of scale and rotation as compared to other techniques, which require

coherence of phase and size. The model can be further improved by incorporating a technique that processes each image to provide uniform luminosity. Furthermore, the ROC distribution obtained from all the test cases (shown in Figure 6) shows that the performance of the proposed model is comparable with Daugman's method, while it yields better performance than the methods described in [1, 6, 20].
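The rotation invariance claimed above for the moment features can be checked directly with Hu's seven invariants [10]. A self-contained sketch in pure NumPy follows; a lossless 90° rotation is used so that no interpolation error enters the comparison.

```python
import numpy as np

def hu_moments(img):
    """Compute the seven Hu invariant moments of a 2-D grayscale array."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                       # raw moment of order zero (total mass)
    xbar = (x * img).sum() / m00          # centroid
    ybar = (y * img).sum() / m00
    def mu(p, q):                         # central moment: translation invariant
        return (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()
    def eta(p, q):                        # normalized central moment: scale invariant
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# A rotated copy of the same "image" should yield the same invariants.
rng = np.random.default_rng(0)
img = rng.random((64, 48))
invariant = np.allclose(hu_moments(img), hu_moments(np.rot90(img)),
                        rtol=1e-6, atol=1e-18)
```

Note that, as the conclusion observes, the raw moments `mu(p, q)` are not preserved under rotation; only the Hu combinations are.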

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.

[2] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586–595, 2007.

[3] K. Roy, P. Bhattacharya, and R. C. Debnath, "Multi-class SVM based iris recognition," in Proceedings of the 10th International Conference on Computer and Information Technology (ICCIT '07), pp. 1–6, Dhaka, Bangladesh, December 2007.

[4] R. H. Abiyev and K. Altunkaya, "Personal iris recognition using neural network," International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41–50, 2008.

[5] Y. D. Khan, F. Ahmad, and M. W. Anwar, "A neuro-cognitive approach for iris recognition using back propagation," World Applied Sciences Journal, vol. 16, no. 5, pp. 678–685, 2012.

[6] L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of the 5th Asian Conference on Computer Vision, pp. 279–283, Melbourne, Australia, 2002.

[7] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998.

[8] K. Lee, S. Lim, O. Byeon, and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61–70, 2001.

[9] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.

[10] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179–187, 1962.

[11] E. Arias-Castro and D. L. Donoho, "Does median filtering truly preserve edges better than linear filtering?" Annals of Statistics, vol. 37, no. 3, pp. 1172–1206, 2009.

[12] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.

[13] N. Kanopoulos, N. Vasanthavada, and R. L. Baker, "Design of an image edge detection filter using the Sobel operator," IEEE Journal of Solid-State Circuits, vol. 23, no. 2, pp. 358–367, 1988.

[14] J. D. Foley, Computer Graphics: Principles and Practice, Addison-Wesley, Reading, Mass, USA, 1996.

[15] S. O. Belkasim, M. Shridhar, and M. Ahmadi, "Pattern recognition with moment invariants: a comparative study and new results," Pattern Recognition, vol. 24, no. 12, pp. 1117–1138, 1991.

[16] J. Flusser, T. Suk, and B. Zitova, Moments and Moment Invariants in Pattern Recognition, John Wiley & Sons, Chichester, UK, 2009.

[17] W. T. Freeman, D. B. Anderson, P. A. Beardsley et al., "Computer vision for interactive computer graphics," IEEE Computer Graphics and Applications, vol. 18, no. 3, pp. 42–52, 1998.

[18] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, "NP-hardness of Euclidean sum-of-squares clustering," Machine Learning, vol. 75, no. 2, pp. 245–248, 2009.

[19] J. A. Hartigan and M. A. Wong, "Algorithm AS 136: a k-means clustering algorithm," Journal of the Royal Statistical Society C: Applied Statistics, vol. 28, no. 1, pp. 100–108, 1979.

[20] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognition, vol. 38, no. 2, pp. 231–240, 2005.


Page 7: Research Article Iris Recognition Using Image Moments and ...downloads.hindawi.com/journals/tswj/2014/723595.pdf · Image moments and method of computation of moments are described

The Scientific World Journal 7

15

10

5

00 1 2 3 4 5 6 7 8 9

times104

times104

(a)

0 1 2 3 4 5 6 7 8 9

times104

times104

10

14

12

10

8

6

4

2

0

(b)

Figure 4 Each of (a) and (b) shows different clusters Notice that all the clusters are linearly separable and can be distinguished by theirEuclidean distance from the Centroid

to compute image moment vector This vector containsinformationwhich is translation scale and rotation invariantand provides orientation information Using the k-meansalgorithm each image is assigned to a cluster The k-meansalgorithm is iterated until convergence is achieved and thecentroid of each cluster is determined Once the system isfully trained it is ready to accept an arbitrary input andprovide a match The model responds with the correlation ofan arbitrary image moments vector to a cluster if the imagebelongs to a known class In Figure 4 various clusters formedare depicted it also shows how the class of a sample is dis-tinguished based upon the Euclidean distance of the featurevector of the sample from the centroid of an arbitrary clusterMoreover Figure 5 shows a confusion matrix depicting theaccuracy of the model The confusion matrix shows that theaccuracy of the model for certain arbitrary classes is 990while the overall accuracy of the model for all the images inthe database is estimated to be 985Moreover it also reportsthe level of confidence of match based on Euclidean distanceof the sample from the centroid of the identified cluster Level0 is the highest which means that the Euclidean distance ofthe sample from the centroid of the cluster is low and level4 is the lowest which indicates that the Euclidean distance ofthe sample from the centroid of any cluster does not lie withina stipulated threshold to confidently indicate a match Figure4 shows the clusters formed using the k-means algorithm

Furthermore a number of experimentswere carried out todetermine the accuracy and efficiency of the proposed modelin comparison with other competitive models In [20] theauthors present a technique which extracts the features ofiris using fine-to-coarse approximation at different resolutionlevels determined through discrete dyadic wavelet transformzero crossing representation The resultant one-dimensionalfeature vector is used to find a match by computing variousdistances with arbitrary feature vectors Ma et al presentyet another iris recognition technique using Gabor filters

[1 6] The authors use a bank of Gabor filters to extract afixed length feature vector signifying the global and localcharacteristics of the iris image A match is established bycomputing the weighted Euclidean distance between featurevectors of arbitrary iris images Daugman in [9] relies onthe morphogenetic randomness of texture in the trabecularmeshwork of iris A failure of statistical independence teston two coded patterns from same iris indicates a matchThis method extracts the visible texture of iris from a realtime video image The image is later encoded into a com-pact sequence of multiscale quadrature 2D Gabor waveletcoefficients The most significant 256 bytes form the iriscode An exclusive OR operation is performed to generate adecision All the above-mentioned techniques including theproposed technique are executed in order to obtain resultsThe genuine acceptance rate (GAR) and false acceptance rate(FAR) are observed for each technique A receiver operatingcharacteristics (ROC) distribution is plotted for each tech-nique based on the results as shown in Figure 6 The ROCdistribution comparatively highlights the accuracy alongwith the frequency of occurrence of errors of the proposedand other current state-of-art models The following sectionbriefly provides some discussion about the proposed systemalong with an interpretation of the ROC distribution formed

9 Conclusion

Through analysis of data obtained after moments extractiona number of conclusions are inferred Images of a certainiris differing in orientation yielded varying eigenvalues andeccentricity However a change in orientation of an imagebarely affects the values of rotation invariant moments whileraw and scale invariant moments are affected Change inorientation of an image affects the Euclidean distance ofthe moment vectors from the centroid Despite this therestill remains a great probability of the image to be classified

8 The Scientific World Journal

0

000

000

000

000

000

000

000

000

000

000

00

0

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

00

0

000

00

0

000

000

00

0

000

00

0

00

0

000

00

0

000

000

000

000

00

0

00

0

00

0

000

000

000

000

000

000

000

000

000

00

0

000

00

0

00

0

00

0

00

0

000

000

000

000

000

00

0

000

000

000

000

000

000

00

0

000

00

0

000

00

0

000

000

000

000

00

0

000

000

000

000

000

000

000

00

0

00

0

00

0

000

000

000

00

0

000

00

0

000

000

000

000

00

0

000

000

000

000

00

0

000

000

000

000

00

0

000

001

051

050

00

0

000

00

0

00

0

000

00

17

83

17

8317

83

17

83

18

88

16

78

16

78

19

93

2098

15

74

15

741574

100 100 100 100 100 100 100 10000

10000

100001000010000

10000

1000010000

10000

1000010000

00 00 00 00 00 00 0010000

10000

944

56938

63

990

10

941

950

50

59

1

2

3

4

5

6

7

8

9

10

11

12

1 2 3 4 5 6 7 8 9 10 11 12

Out

put c

lass

Confusion matrix

Target class

Figure 5 The figure shows the confusion matrix for some arbitrary classes while the accuracy of the model for these classes is 990

1005

1

0995

099

0985

098

0975

097

09650001 001 01 1

AvilaLi MaProposedDaugman

Figure 6The figure illustrates the receiver operating characteristicsdistributions for different competitive techniques including theproposed one

correctly because of coherence in scale invariant momentsAlthough the model exhibits scale and rotation invariantattributes but some impairment is offered by luminosity ofthe image Two arbitrary images of the same objects yieldcomparable moments if the luminosity is the same but theymay yield differing moments in case luminosity is altered Inthe underlying research work it is assumed that the luminos-ity level will be the same for all the images as each imageis obtained by an iriscope working in similar conditionsThe model provides resilience towards variation of scale androtation as compared to other techniques which requires

coherence of phase and size The model can be furtherimproved by incorporation of a technique that will processeach image to provide uniform luminosity Furthermore theROC distribution obtained (shown in Figure 6) from all thetest cases shows that the performance of proposed model iscomparable with Daugman method while it yields a betterperformance than the methods described in [1 6 20]

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] L Ma T Tan YWang and D Zhang ldquoEfficient iris recognitionby characterizing key local variationsrdquo IEEE Transactions onImage Processing vol 13 no 6 pp 739ndash750 2004

[2] D M Monro S Rakshit and D Zhang ldquoDCT-bsed iris recog-nitionrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 29 no 4 pp 586ndash595 2007

[3] K Roy P Bhattacharya and R C Debnath ldquoMulti-class SVMbased iris recognitionrdquo in Proceedings of the 10th InternationalConference on Computer and Information Technology (ICCITrsquo07) pp 1ndash6 Dhaka Bangladesh December 2007

[4] R H Abiyev and K Altunkaya ldquoPersonal iris recognitionusing neural networkrdquo International Journal of Security and ItsApplications vol 2 no 2 pp 41ndash50 2008

The Scientific World Journal 9

[5] Y D Khan F Ahmad and M W Anwar ldquoA neuro-cognitiveapproach for iris recognition using back propagationrdquo WorldApplied Sciences Journal vol 16 no 5 pp 678ndash685 2012

[6] L Ma Y Wang and T Tan ldquoIris recognition based on multi-channel Gabor filteringrdquo in Proceedings of the 5th Asian Con-ference on Computer Vision pp 279ndash283 Melbourne Australia2002

[7] W W Boles and B Boashash ldquoA human identification tech-nique using images of the iris and wavelet transformrdquo IEEETransactions on Signal Processing vol 46 no 4 pp 1185ndash11881998

[8] K Lee S Lim O Byeon and T Kim ldquoEfficient iris recognitionthrough improvement of feature vector and classifierrdquo ETRIJournal vol 23 no 2 pp 61ndash70 2001

[9] J Daugman ldquoHow iris recognition worksrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 14 no 1 pp21ndash30 2004

[10] M-K Hu ldquoVisual pattern recognition by moment invariantsrdquoIRE Tranactions on InformationTheory vol 8 no 2 pp 179ndash1871962

[11] E Arias-Castro and D L Donoho ldquoDoes median filtering trulypreserve edges better than linear filteringrdquo Annals of Statisticsvol 37 no 3 pp 1172ndash1206 2009

[12] J Canny ldquoA computational approach to edge detectionrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol8 no 6 pp 679ndash698 1986

[13] N Kanopoulos N Vasanthavada and R L Baker ldquoDesign ofan image edge detection filter using the Sobel operatorrdquo IEEEJournal of Solid-State Circuits vol 23 no 2 pp 358ndash367 1988

[14] JD FoleyComputerGraphics Principles andPractice Addison-Wesley Reading Mass USA 1996

[15] S O Belkasim M Shridhar and M Ahmadi ldquoPattern recog-nition with moment invariants a comparative study and newresultsrdquo Pattern Recognition vol 24 no 12 pp 1117ndash1138 1991

[16] J Flusser T Suk and B Zitova Moments and Moment Invari-ants in Pattern Recognition JohnWiley amp Sons Chichester UK2009

[17] W T Freeman D B Anderson P A Beardsley et al ldquoCom-puter vision for interactive computer graphicsrdquo IEEE ComputerGraphics and Applications vol 18 no 3 pp 42ndash52 1998

[18] D Aloise A Deshpande P Hansen and P Popat ldquoNP-hardnessof Euclidean sum-of-squares clusteringrdquoMachine Learning vol75 no 2 pp 245ndash248 2009

[19] J A Hartigan and M A Wong ldquoAlgorithm AS 136 a k-meansclustering algorithmrdquo Journal of the Royal Statistical Society CApplied Statistics vol 28 no 1 pp 100ndash108 1979

[20] C Sanchez-Avila and R Sanchez-Reillo ldquoTwo different ap-proaches for iris recognition using Gabor filters and multi-scale zero-crossing representationrdquo Pattern Recognition vol 38no 2 pp 231ndash240 2005

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 8: Research Article Iris Recognition Using Image Moments and ...downloads.hindawi.com/journals/tswj/2014/723595.pdf · Image moments and method of computation of moments are described

8 The Scientific World Journal

0

000

000

000

000

000

000

000

000

000

000

00

0

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

000

00

0

000

00

0

000

000

00

0

000

00

0

00

0

000

00

0

000

000

000

000

00

0

00

0

00

0

000

000

000

000

000

000

000

000

000

00

0

000

00

0

00

0

00

0

00

0

000

000

000

000

000

00

0

000

000

000

000

000

000

00

0

000

00

0

000

00

0

000

000

000

000

00

0

000

000

000

000

000

000

000

00

0

00

0

00

0

000

000

000

00

0

000

00

0

000

000

000

000

00

0

000

000

000

000

00

0

000

000

000

000

00

0

000

001

051

050

00

0

000

00

0

00

0

000

00

17

83

17

8317

83

17

83

18

88

16

78

16

78

19

93

2098

15

74

15

741574

100 100 100 100 100 100 100 10000

10000

100001000010000

10000

1000010000

10000

1000010000

00 00 00 00 00 00 0010000

10000

944

56938

63

990

10

941

950

50

59

1

2

3

4

5

6

7

8

9

10

11

12

1 2 3 4 5 6 7 8 9 10 11 12

Out

put c

lass

Confusion matrix

Target class

Figure 5 The figure shows the confusion matrix for some arbitrary classes while the accuracy of the model for these classes is 990

1005

1

0995

099

0985

098

0975

097

09650001 001 01 1

AvilaLi MaProposedDaugman

Figure 6The figure illustrates the receiver operating characteristicsdistributions for different competitive techniques including theproposed one

correctly because of coherence in scale invariant momentsAlthough the model exhibits scale and rotation invariantattributes but some impairment is offered by luminosity ofthe image Two arbitrary images of the same objects yieldcomparable moments if the luminosity is the same but theymay yield differing moments in case luminosity is altered Inthe underlying research work it is assumed that the luminos-ity level will be the same for all the images as each imageis obtained by an iriscope working in similar conditionsThe model provides resilience towards variation of scale androtation as compared to other techniques which requires

coherence of phase and size The model can be furtherimproved by incorporation of a technique that will processeach image to provide uniform luminosity Furthermore theROC distribution obtained (shown in Figure 6) from all thetest cases shows that the performance of proposed model iscomparable with Daugman method while it yields a betterperformance than the methods described in [1 6 20]

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] L Ma T Tan YWang and D Zhang ldquoEfficient iris recognitionby characterizing key local variationsrdquo IEEE Transactions onImage Processing vol 13 no 6 pp 739ndash750 2004

[2] D M Monro S Rakshit and D Zhang ldquoDCT-bsed iris recog-nitionrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 29 no 4 pp 586ndash595 2007

[3] K Roy P Bhattacharya and R C Debnath ldquoMulti-class SVMbased iris recognitionrdquo in Proceedings of the 10th InternationalConference on Computer and Information Technology (ICCITrsquo07) pp 1ndash6 Dhaka Bangladesh December 2007

[4] R H Abiyev and K Altunkaya ldquoPersonal iris recognitionusing neural networkrdquo International Journal of Security and ItsApplications vol 2 no 2 pp 41ndash50 2008

The Scientific World Journal 9

[5] Y D Khan F Ahmad and M W Anwar ldquoA neuro-cognitiveapproach for iris recognition using back propagationrdquo WorldApplied Sciences Journal vol 16 no 5 pp 678ndash685 2012

[6] L Ma Y Wang and T Tan ldquoIris recognition based on multi-channel Gabor filteringrdquo in Proceedings of the 5th Asian Con-ference on Computer Vision pp 279ndash283 Melbourne Australia2002

[7] W W Boles and B Boashash ldquoA human identification tech-nique using images of the iris and wavelet transformrdquo IEEETransactions on Signal Processing vol 46 no 4 pp 1185ndash11881998

[8] K Lee S Lim O Byeon and T Kim ldquoEfficient iris recognitionthrough improvement of feature vector and classifierrdquo ETRIJournal vol 23 no 2 pp 61ndash70 2001

[9] J Daugman ldquoHow iris recognition worksrdquo IEEE Transactionson Circuits and Systems for Video Technology vol 14 no 1 pp21ndash30 2004

[10] M-K Hu ldquoVisual pattern recognition by moment invariantsrdquoIRE Tranactions on InformationTheory vol 8 no 2 pp 179ndash1871962

[11] E Arias-Castro and D L Donoho ldquoDoes median filtering trulypreserve edges better than linear filteringrdquo Annals of Statisticsvol 37 no 3 pp 1172ndash1206 2009

[12] J Canny ldquoA computational approach to edge detectionrdquo IEEETransactions on Pattern Analysis and Machine Intelligence vol8 no 6 pp 679ndash698 1986

[13] N Kanopoulos N Vasanthavada and R L Baker ldquoDesign ofan image edge detection filter using the Sobel operatorrdquo IEEEJournal of Solid-State Circuits vol 23 no 2 pp 358ndash367 1988

[14] JD FoleyComputerGraphics Principles andPractice Addison-Wesley Reading Mass USA 1996

[15] S O Belkasim M Shridhar and M Ahmadi ldquoPattern recog-nition with moment invariants a comparative study and newresultsrdquo Pattern Recognition vol 24 no 12 pp 1117ndash1138 1991

[16] J Flusser T Suk and B Zitova Moments and Moment Invari-ants in Pattern Recognition JohnWiley amp Sons Chichester UK2009

[17] W T Freeman D B Anderson P A Beardsley et al ldquoCom-puter vision for interactive computer graphicsrdquo IEEE ComputerGraphics and Applications vol 18 no 3 pp 42ndash52 1998

[18] D Aloise A Deshpande P Hansen and P Popat ldquoNP-hardnessof Euclidean sum-of-squares clusteringrdquoMachine Learning vol75 no 2 pp 245ndash248 2009

[19] J A Hartigan and M A Wong ldquoAlgorithm AS 136 a k-meansclustering algorithmrdquo Journal of the Royal Statistical Society CApplied Statistics vol 28 no 1 pp 100ndash108 1979

[20] C Sanchez-Avila and R Sanchez-Reillo ldquoTwo different ap-proaches for iris recognition using Gabor filters and multi-scale zero-crossing representationrdquo Pattern Recognition vol 38no 2 pp 231ndash240 2005

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 9: Research Article Iris Recognition Using Image Moments and ...downloads.hindawi.com/journals/tswj/2014/723595.pdf · Image moments and method of computation of moments are described

The Scientific World Journal 9
