
Page 1: Efficient Identification Based on Human Iris …sharat/icvgip.org/icvgip2004/...levels of a concentric circle on an iris image to characterize the texture of the iris. Iris matching

Efficient Identification Based on Human Iris Patterns

A. Chitra and R. Bremananth

Department of CSE, PSG College of Technology, Coimbatore-641004, India

[email protected], [email protected]

Abstract

This paper proposes a person identification system (PID) that works with low-resolution eye images captured without special illumination. Radial-scan and threshold methods are used for iris normalization and iris segmentation from the eye image, respectively. A conventional neural network is used for iris matching. The proposed algorithm works with 96 features of the iris; its space complexity is low, and the iris codes are stored in an encrypted file. The algorithm has been implemented and tested on 1500 iris patterns. The images include normal irises captured under indoor and outdoor conditions, as well as images from disease-affected eyes.

1. Introduction

An intelligent and reliable biometric recognition system has long been a goal for security applications. In biometrics, physiological or behavioral characteristics are used to accurately identify each person [4]. The iris is a protected internal organ whose random texture is epigenetic and stable throughout life; hence, it can serve as a kind of living password. Iris recognition provides real-time, high-confidence identification of a person through mathematical analysis of the complex patterns visible within the iris of an eye from a distance of 19 to 24 inches. Figure 1 shows a sample iris image and the structure of the iris. Its complex pattern contains many distinctive features such as arching ligaments, crypts, radial furrows, the pigment frill, the pupillary area, the ciliary area, rings, corona, freckles and the zigzag collarette [4][5]. These spatial patterns in the iris are unique in nature. The iris is more stable and reliable for identification than faces, fingerprints, or voiceprints. The technology works well in both the recognition and confirmation phases. It is non-intrusive and non-invasive [6]. It does not require physical contact with a scanner. The key issue in any pattern recognition problem is the relation between inter-class and intra-class variability; that is, classes can be efficiently classified only

if the variability between features of a given class is less than the variability between classes. In the iris recognition process, the intra-class variability of iris features is less than the inter-class variability. In face recognition, the intra-class (same face) variability is greater, while the inter-class variability is limited because different classes have the same basic set of features. For these reasons, iris recognition has become one of the most promising techniques for person identification in the near future.

Figure 1. A sample eye image (a) and the structure of the iris (b), showing the crypts, radial furrows, pigment frill, pupillary area, ciliary area, collarette, pupil and iris.

The organization of this paper is as follows. Section 2 summarizes the literature on previous research. The proposed method for iris recognition is illustrated in Section 3. Experiments and results are given in Section 4. Section 5 gives concluding remarks on the work.

2. Review of literature

In the process of iris recognition, it is essential to convert an acquired iris image into a suitable code that can be easily manipulated. Li Ma et al. proposed a method based on multi-channel Gabor filters and a near-line discriminator [6]. Boles and Boashash [1] calculated a zero-crossing representation of the 1D wavelet transform at various resolution levels of a concentric circle on an iris image to characterize the texture of the iris; iris matching was based on two dissimilarity functions. Wildes et al. [13] represented the iris texture with a Laplacian pyramid constructed at four different resolution levels and used the normalized correlation


to determine whether the input image and the model image are from the same class. Daugman [3] used multi-scale quadrature wavelets to extract texture phase structure information from the iris and generate a 2,048-bit (256-byte) iris code; a pair of iris representations is compared based on Hamming distance. In [11], Sanchez-Reillo and Sanchez-Avila provided a partial implementation of Daugman's algorithm. Shinyoung Lim et al. [12] decomposed an iris image into four levels using the 2D Haar wavelet transform. In [2], circular-symmetric and DDA (Digital Differential Analysis) methods were used for iris localization and normalization respectively; 360 features were used for classification, and the classifier design was carried out with WED (Weighted Euclidean Distance). In this paper, we propose an iris recognition system with a compact representation scheme for iris patterns based on Gabor spatial filters. A conventional neural network is used for iris matching. An image validity module is included for real-image checking, to prevent illegal entries.

3. Iris recognition algorithms

The block diagram of the proposed system is shown in Figure 2. It consists of the following modules. An important and complex step of an iris recognition system is image acquisition. Especially for Indian people, the iris is small in size and dark in color, and it is difficult to acquire clear images for analysis using a standard CCD camera with ordinary lighting. In this module, the iris camera captures the user's eye image using near-infrared (NIR) illumination. Research in iris image acquisition [8], [10] has made noninvasive imaging at a distance possible. The image acquisition phase should consider three main aspects, namely the lighting system, the positioning system, and the physical capture system. The iris recognition system can work both outdoors and indoors without any hot spots in the lighting conditions.

Biometric features may be counterfeited and criminally used; this is a crucial weakness of biometric systems. Iris validity checking aims to ensure that an input image actually comes from a person rather than an iris photograph, a phony eye, or other artificial sources. Daugman [3][5], Li Ma [6] and Wildes [13] discussed this issue with some possible schemes but did not document a specific method. A biometrics-hoax-resistant method is suggested here to overcome this problem. However, image validity checking is still limited, though it is highly desirable.

The image prominence checking module solves the problem of how to select an apparent and well-focused iris image from an image acquisition sequence. This involves gray-level checking, detecting inter-frame eyelashes, detecting spectacles, and detecting truncation of the iris. Iris recognition includes seven modules: image validity checking, image prominence checking, iris extraction, iris normalization, image intensification, feature extraction and iris matching.

Figure 2. Block diagram of the proposed system: eye image acquisition, image validity checking (reject if not genuine), image prominence checking (reject if not genuine), iris inner- and outer-boundary detection and normalization, iris image intensification, and iris feature extraction and matching (accept if genuine, otherwise reject).

A detailed description of each module is as follows.

3.1. Image validity checking

In this module, a biometrics-hoax-resistant method is used to check iris validity. In this method, the NIR echo time is used to distinguish a genuine source from artificial sources. It assures that an input comes from a real sequence instead of a photograph or other artificial sources. The biometric capturing devices need to be capable of ensuring that they are inspecting genuine user features (as opposed to a photograph or recording) and that the output signal cannot be substituted. This helps prevent replay attacks from lifted video signals, for instance those obtained by connecting a video recorder to the frame-grabber.

3.2. Image prominence Checking

During iris image capture, different types of noisy images may be produced. Figure 3 shows images of irises in different conditions. It is necessary to select a high-quality image from the input. Image prominence checking is an important process in iris recognition, validating the quality of an input eye image. It prevents mis-extraction and false matches in the feature extraction and iris matching modules respectively. However, effort on image prominence checking is still limited. Daugman measured the total high-frequency power in the 2D Fourier spectrum of an image to assess its focus. If an image can pass a minimum focus

Page 3: Efficient Identification Based on Human Iris …sharat/icvgip.org/icvgip2004/...levels of a concentric circle on an iris image to characterize the texture of the iris. Iris matching

Figure 3. Images of irises: (a) interfered with by eyelashes, (b) truncation of the iris, (c) spectacles reflection.

criterion, it will be used for recognition. Here, we present an effective scheme to review image quality by analyzing the spatial domain of the iris image. The algorithm is described as follows:

• Eye image with blink: This type of eye image does not show the iris properly in focus. The following steps may be applied to check the quality.

Step 1: Evaluate the gray level of the pixels in the eyelid area.
Step 2: If the average intensity I(x, y) of the eyelid area is greater than T, then the eye data have flashing noise; otherwise, proceed to the preprocessing module. Eq. 1 describes this process:

\frac{1}{MN} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} I(x_i, y_j) > T    (1)

where N and M are the width and height of the eye image, T is a threshold value for eyelashes, and I(x_i, y_j) is the gray-level value of the image.
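The Eq. 1 test reduces to a mean-intensity threshold over the eyelid region. A minimal sketch (in Python for illustration; the eyelid-region extraction and the threshold T are assumed to be supplied by earlier stages):

```python
def has_flash_noise(eyelid_region, T):
    """Eq. 1: return True when the average gray level of the eyelid
    region exceeds the eyelash threshold T (i.e., the frame is noisy)."""
    total = sum(sum(row) for row in eyelid_region)
    count = sum(len(row) for row in eyelid_region)
    return total / count > T
```

A frame failing this test would be discarded before preprocessing.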

• Detecting eyelashes: Eyelash interference is a serious issue in the feature extraction process. Detection of closed eyelashes can be done as follows:
Step 1: A line detector is used to find line components between the inner boundary and the iris area.
Step 2: If the endpoint of a line component is under the pupil center, then the iris area is interfered with by eyelashes; otherwise, the iris is in normal condition. The line detector kernels are illustrated in Eq. 2-3.

Rh = −z1+2z2−z3−z4+2z5−z6−z7+2z8−z9 (2)

Rv = −z1−z2−z3+2z4+2z5+2z6−z7−z8−z9 (3)

where z1 - z9 denote gray level of iris pixels, Rh, Rv repre-sent horizontal and vertical kernel respectively.
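The kernels of Eq. 2-3 can be written down directly and applied to a 3x3 neighborhood z1..z9 (row-major). This sketch assumes plain correlation with no normalization:

```python
# Line-detector kernels from Eq. 2 (RH) and Eq. 3 (RV); z1..z9 are row-major.
RH = [[-1, 2, -1],
      [-1, 2, -1],
      [-1, 2, -1]]
RV = [[-1, -1, -1],
      [ 2,  2,  2],
      [-1, -1, -1]]

def line_response(z, kernel):
    """Correlate a 3x3 gray-level neighborhood with a line-detector kernel."""
    return sum(z[i][j] * kernel[i][j] for i in range(3) for j in range(3))
```

A bright one-pixel-wide line through the middle column gives a strong RH response and a zero RV response.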

• Detecting spectacles reflection: Spectacles reflection in the iris area is another issue in feature extraction and classifier design (Figure 3c). Bound a rectangle on the iris image and check for reflection of the glasses in this portion. If the reflection of the glass is within the tolerance level, then continue preprocessing; otherwise, reject the image.

• Detecting a truncated iris area: Truncation of the iris pattern may be present in eye images, as shown in Figure 3b. Detection of a truncated iris pattern proceeds as follows:
Step 1: Bound a rectangle on the eye image.
Step 2: Scan the perimeter of the rectangle and count the number of pixels that satisfy the iris threshold value.
Step 3: If the number of such pixels is above the tolerance level, then reject the image; otherwise, proceed to preprocessing. Eq. 4 describes this process:

Trunc(I(x, y)) = { True if Ct(R(I(x, y)) <= t_i) > T; False otherwise }    (4)

where I(x, y), t_i and T represent the iris image, the iris threshold and the tolerance level respectively, and Ct and R represent the count and rectangle functions respectively.
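Eq. 4 counts dark (iris-valued) pixels along the rectangle's perimeter. A sketch, under the assumption that the bounding rectangle is the whole image:

```python
def is_truncated(img, ti, T):
    """Eq. 4: True when more than T perimeter pixels of the bounding
    rectangle have a gray level <= the iris threshold ti."""
    h, w = len(img), len(img[0])
    perimeter = [img[0][x] for x in range(w)] + [img[h - 1][x] for x in range(w)]
    perimeter += [img[y][0] for y in range(1, h - 1)]
    perimeter += [img[y][w - 1] for y in range(1, h - 1)]
    return sum(1 for v in perimeter if v <= ti) > T
```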

3.3. Algorithm for iris segmentation

Once image prominence checking passes, the next step is the iris segmentation process. This process completely and efficiently eliminates the pupil, eyelashes and other portions that are not necessary for the feature extraction and iris matching modules. Both the inner and outer boundaries of a typical iris can be considered circles, but the two circles are usually not concentric. Detection of the inner and outer boundaries of the iris image is shown in Figure 4. In this system, threshold values are used for detecting the pupil and iris boundaries; thresholding is an efficient image processing technique for segmentation. The steps involved in segmentation are summarized as follows:
Step 1: Rotate the eye image 180 degrees to avoid detecting thick eyebrows as a pupil boundary. A typical iris image is shown in Figure 4a.
Step 2: Detect the position of the camera flash using threshold values, as given by Eq. 5:

Fh(x, y, r) = { True if I(x, y) > T1 and T2 < C8(x, y, r) < T3; False otherwise }    (5)

where x, y and r are the coordinates and radius in the iris image respectively, I(x, y) represents the gray-level value of the pixel, and C8(x, y, r) denotes its 8-way circular neighbors. T1, T2 and T3 are the threshold value of the flash and the minimum and maximum gray values of the camera flash boundary respectively.
Step 3: The gray-level value is used to detect the iris inner boundary. This method is as follows:
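The paper does not define the C8 sampling precisely; the sketch below interprets Eq. 5 as: a flash pixel is very bright (> T1) while the mean of its eight circular neighbors at radius r lies between T2 and T3. The neighbor offsets are an assumption.

```python
def is_camera_flash(img, x, y, r, T1, T2, T3):
    """Eq. 5 (interpretation): a bright center pixel surrounded by a ring
    of moderate gray levels marks a camera-flash hot spot."""
    if img[y][x] <= T1:
        return False
    # 8-way circular neighbors at radius r (assumed sampling of C8)
    offsets = [(0, -r), (r, -r), (r, 0), (r, r),
               (0, r), (-r, r), (-r, 0), (-r, -r)]
    ring = [img[y + dy][x + dx] for dx, dy in offsets]
    mean = sum(ring) / len(ring)
    return T2 < mean < T3
```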

• Set top-left corner coordinate of iris image as I(x,y)and check threshold of its eight neighbors. Eq.6 canperform this operation.


Pupil(x, y, r) = { True if C8(x, y, r) < T1 and r > T2; False otherwise }    (6)

where C8(x, y, r) and r represent the 8-way circular neighbors and the radius of the pupil respectively, T1 is the threshold value of the pupil area, and T2 is the minimum radius of the pupil.

Figure 4. Iris image after detecting the inner and outer boundaries.

If Pupil(x, y, r) is true, then the following procedure is used to find the pupil boundary:
Step 1: Let (xp, yp) be any coordinate in the pupil.
Step 2: Increment xp by 1 until the right end of the pupil boundary is reached; let that point be x''. Decrement xp by 1 until the left end of the pupil boundary is reached; let that point be x'. Then x = (x' + x'') / 2.
Step 3: Increment yp by 1 until the bottom of the pupil boundary is reached; let that point be y''. Decrement yp by 1 until the top of the pupil boundary is reached; let that point be y'. Then y = (y' + y'') / 2.
Step 4: Draw a circle according to the pupil boundary threshold.
The outer boundary of the iris is more difficult to detect because of the low variation of contrast at the outer edge. It can be detected by the 4-way symmetry method. The procedure for detecting the iris outer boundary is as follows:

• Each pixel within the pupil is considered as a centerand concentric circles are drawn using 4-way circularsymmetry method.

• All the pixel values on the boundary of the drawn circle are checked against the threshold value t_i. The circle that contains the maximum number of pixel values satisfying the threshold is the iris outer boundary. Eq. 7 accomplishes this process:

Ir(x, y, R_ir) = { 1 if Max(C4(x, y, R_ir)) < t_i; 0 otherwise }    (7)

where Ir(x, y, R_ir) represents the iris position in the image I(x, y), R_ir is the iris radius, t_i is the threshold value and C4(x, y, R_ir) is the 4-way circular method.
Step 5: If eyelashes are seen in the iris portion, they are removed from the previously processed image. Eyelashes are masked by a threshold value: if an eyelash is seen in the iris area, those pixels are marked for deletion, as given in Eq. 8. Figures 5a and 5b show the image after removing the inner boundary, the outer boundary and the eyelashes.

Figure 5. After removing the inner boundary, outer boundary and eyelashes.

Eyelash(x, y) = { 1 if T1 < I(x, y) <= T2; 0 otherwise }    (8)

where I(x, y) represents the gray value of the pixel, and T1 and T2 are the minimum and maximum threshold values of eyelashes.
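Steps 1-4 of the pupil-boundary search above amount to scanning left/right and up/down from a seed point until the gray level leaves the pupil range. A sketch (the boundary test img < T1 is an assumption about how "boundary reached" is decided):

```python
def pupil_center(img, xp, yp, T1):
    """Steps 1-4: scan from a seed (xp, yp) inside the dark pupil until the
    gray level reaches T1, then average the extremes to estimate the center."""
    x2 = xp
    while x2 + 1 < len(img[0]) and img[yp][x2 + 1] < T1:  # right end x''
        x2 += 1
    x1 = xp
    while x1 - 1 >= 0 and img[yp][x1 - 1] < T1:           # left end x'
        x1 -= 1
    y2 = yp
    while y2 + 1 < len(img) and img[y2 + 1][xp] < T1:     # bottom end y''
        y2 += 1
    y1 = yp
    while y1 - 1 >= 0 and img[y1 - 1][xp] < T1:           # top end y'
        y1 -= 1
    return (x1 + x2) // 2, (y1 + y2) // 2
```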

3.4. Algorithm for Iris Normalization

As the iris is captured under different conditions (non-uniform illumination, eye blink, pupil radius changes due to strong lighting, etc.), the output images may be of different sizes. This variation may affect the results of iris matching. To overcome this issue, the iris image is converted to a standard size with width 360 and height 48. A dilation or erosion process is also carried out to extend or contract the iris strip when less or more data are present in the iris portion. This process is called iris normalization. The image is converted to a standard rectangular strip using the radial-scan method, which collects all pixels in the iris portion and produces a trapezoidal image. The procedure is as follows:
Step 1: Set any point in the iris portion as a reference point.
Step 2: Draw concentric circles around that reference point (Figure 6a).
Step 3: Repeat Step 2 until the iris outer boundary is reached.
Step 4: Collect all pixels along each circle to construct a trapezoidal pattern (Figure 6b).
Step 5: The right triangular portion of the image is removed and mapped into the left portion (from a, b and c to a', b' and c'). After this process, the dilation or erosion module can be applied to form a rectangular strip.
Figures 6-8 describe the various steps involved in the iris normalization process.
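The radial scan can be viewed as sampling the iris annulus into a fixed 360x48 strip. A sketch (nearest-neighbor sampling and the circular geometry are assumptions; the paper uses concentric circles from a reference point):

```python
import math

def radial_scan(img, cx, cy, r_pupil, r_iris, width=360, height=48):
    """Sample the iris annulus along `width` angles and `height` radial
    steps into a fixed rectangular strip (Section 3.4's 360x48 target)."""
    strip = [[0] * width for _ in range(height)]
    for col in range(width):
        theta = 2 * math.pi * col / width
        for row in range(height):
            r = r_pupil + (r_iris - r_pupil) * row / (height - 1)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= y < len(img) and 0 <= x < len(img[0]):
                strip[row][col] = img[y][x]
    return strip
```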

Figure 6. Operation of the radial-scan method: concentric circles (a) are unrolled into a trapezoidal pattern, and the right triangular portion (a, b, c) is mapped to the left (a', b', c') (b), forming a rectangular strip (c).

Figure 7. Output of the radial-scan method.

3.5. Image intensification

The image intensification process is based on a spatial-domain approach. The original iris image has low contrast and may have non-uniform illumination caused by the irregular position of the light source. These problems may affect subsequent feature extraction and pattern matching. To obtain a well-distributed texture image, the iris image is enhanced using local histogram equalization, and high-frequency noise is removed by filtering the image with a Gaussian low-pass filter. Figure 9 shows the effect of applying the Gaussian low-pass filter and the histogram process. Figures 10a and 10b show the histogram effect of the image intensification process.
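The enhancement step can be approximated with a plain histogram equalization; the paper applies it locally, which would amount to running this routine tile-wise. A global sketch:

```python
def histogram_equalize(img, levels=256):
    """Map gray levels through the normalized cumulative histogram so the
    output intensities are spread across the full range."""
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    cdf, running = [0] * levels, 0
    for g in range(levels):
        running += hist[g]
        cdf[g] = running
    return [[round((levels - 1) * cdf[v] / total) for v in row] for row in img]
```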

Figure 8. Output of the iris normalization process.

3.6. Feature Extraction

Feature extraction is the process of extracting information from the iris image. These features cannot be used to reconstruct the image, but they are used in classification. In this method, Gabor filters are used for iris feature extraction. Recently, Gabor filters have been used in rotation-invariant texture classification. In our experiment, we employed a Gabor filtering technique with an isotropic 2D Gaussian for rotation-invariant classification. Gabor elementary functions are Gaussians modulated by oriented complex sinusoidal functions. The frequency-domain Gabor function is defined in Eq. 9:

G(x, y) = g(x, y) exp(-2πj(u·x + v·y)),  g(x, y) = exp(-(x² + y²)/(2σ²)),  j = √-1    (9)

The complex function G(x, y) can be split into two parts, the even and odd filters Ge(x, y) and Go(x, y), which are also known as the symmetric and anti-symmetric filters respectively. The spatial Gabor filters are described in Eq. 10:

Ge(x, y) = g(x, y) cos(2πf(x cos θ + y sin θ))
Go(x, y) = g(x, y) sin(2πf(x cos θ + y sin θ))    (10)

Figure 9. After the image intensification process.

Figure 10. Histograms (a) before and (b) after applying the image intensification process.

where G(x, y) is the Gabor filter kernel, g(x, y) is an isotropic 2D Gaussian function, f is the center frequency, and θ = arctan(U/V) is the orientation. An iris strip of fixed size 360 × 48 is taken for the feature extraction process. The central frequencies of the Gabor filters used in our experiment range from 2 to 64, incremented in powers of two. For each central frequency, filtering is performed with θ = 0° to 135°, incremented by 45°. A total of 24 kernels (6 frequencies × 4 orientations) is generated for each output image. The rectangular strip is divided into 4 sub-images, and the 24 kernels are applied to every sub-image, resulting in 4 × 24 = 96 feature values. Kernels with the maximum number of positive coefficients were chosen for processing.
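The even/odd pair of Eq. 10 with an isotropic Gaussian envelope can be generated directly; the kernel size and sigma below are illustrative assumptions:

```python
import math

def gabor_kernels(size, f, theta, sigma):
    """Eq. 10: even (cosine) and odd (sine) Gabor kernels with an
    isotropic 2D Gaussian envelope g(x, y)."""
    half = size // 2
    even = [[0.0] * size for _ in range(size)]
    odd = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            x, y = j - half, i - half
            g = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            phase = 2.0 * math.pi * f * (x * math.cos(theta) + y * math.sin(theta))
            even[i][j] = g * math.cos(phase)
            odd[i][j] = g * math.sin(phase)
    return even, odd
```

Sweeping f over the six central frequencies and theta over 0°, 45°, 90° and 135° yields the 24 kernels described above.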

3.7. Iris Matching

In this module, an iris strip is represented as a 1D feature vector of size 96. The feature set FSi = {f1, f2, ..., f96} is used for the recognition process. A BPN (back-propagation network) is used for iris matching. For this process, the neural network model has 96 neurons in the input layer, two hidden layers with 12 neurons each, and one output neuron. A soft-limiting sigmoid function is used for the activation of each neuron in the network, shown in Eq. 11:

S(I_j) = 1 / (1 + exp(-(I_j + θ_j)/θ_o))    (11)

where I_j, j = 1, 2, ..., N_j, represents the input to the activation element of each node in layer J of the network, θ_j is an offset and θ_o controls the shape of the sigmoid function. The input to a node in any layer is the weighted sum of the outputs from the previous layer, shown in Eq. 12:

I_j = Σ_{k=1}^{N_k} W_{jk} O_k,   E_q = (1/2) Σ_q (N_q - O_q)²    (12)


where N_q is the number of nodes in the output layer and E_q is the error in the network. Learning in the BPN is accomplished using random initialization of the weight vectors. In the recognition process, we set a matching threshold level. This determines the error tolerance of the system with respect to the features for which the network is trained, and it is used to determine whether the final result is accepted or rejected. For any recognition system used in security applications, this error tolerance should be minimal, and therefore the setting of this matching threshold is a crucial factor.
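The activation of Eq. 11 and the weighted-sum input of Eq. 12 can be sketched as follows (the offset and shape parameters default to illustrative values):

```python
import math

def sigmoid(I_j, theta_j=0.0, theta_0=1.0):
    """Eq. 11: soft-limiting sigmoid with offset theta_j and shape theta_0."""
    return 1.0 / (1.0 + math.exp(-(I_j + theta_j) / theta_0))

def node_input(weights, outputs):
    """Eq. 12: weighted sum of the previous layer's outputs."""
    return sum(w * o for w, o in zip(weights, outputs))
```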

4. Experiments and Results

The implementation was done in Java, and the time efficiency is based on observations made on a Pentium IV 2.4 GHz machine with 256 MB RAM. Experiments were performed with different eye images under different criteria such as normal, contact lens, spectacles, and diseases. Capturing eye images in different situations poses more challenges to this approach. 1500 iris patterns were taken, and their features were extracted and stored in an encrypted file. These feature sets are referred to as fs1, fs2, ..., fs1500. The iris image database details are listed in Table 1.

Table 1. Iris images with different criteria

Normal iris images
Age     Gender         % of class   Duration                 Location
10-20   Male / Female  10 / 10      15- to 90-day intervals  30% outdoor, 70% indoor
21-40   Male / Female  25 / 15
> 40    Male / Female  30 / 10

Iris with eyeglasses / contact lens
Age     Gender         % of class   Duration                 Location
21-60   Male / Female  50 / 50      15- to 90-day intervals  100% indoor

Disease-affected eye
Age     Gender         % of class   Duration                 Location
40-60   Male / Female  50 / 50      15- to 90-day intervals  100% indoor

This algorithm was tested in two different phases of operation, namely the recognition phase and the confirmation phase.

In the recognition phase, a person's feature set was sequentially chosen and compared with the remaining set of 1500 feature sets; that is, one iris feature set is compared with many iris feature sets. The BPN threshold values were adjusted according to the different criteria of the irises. During training, the error tolerance allowed for the system is 0.00047. In the recognition process, a matching threshold is set, which adds to the total error tolerance of the system; it has been set to 0.0000124 based on experimental results. This gives an overall error tolerance of 0.0004824. The algorithm provides a non-matching ratio (NMR) of 99.21% and a matching ratio (MR) of 0.79% under the normal criterion. The results of the recognition phase are given in Table 2. The confirmation phase is based on one-to-one matching. To perform this phase of operation, a feature set was trained and compared with the same feature set at different times and under different conditions. This phase provides a genuine accept rate (GAR) of 99.21% and a false accept rate (FAR) of 0.79% in the normal condition. Table 3 shows the results of the confirmation phase.

Table 2. Iris recognition phase results

Criterion                               NMR      MR
Normal eye                              99.21%   0.79%
Eye with spectacles and contact lens    98.66%   1.34%
Disease-affected eye                    99.00%   1.00%

4.1. Space, Time and ROC Analysis

The storage requirement for iris matching is considerably reduced in this new approach. 96 features are used to classify different persons; that is, 96 × 16 bits = 192 bytes are needed per iris. In total, the features of 1500 eye images were extracted and stored in a 281.25 KB encrypted file. The execution times for the image prominence checking and iris recognition processes were measured and are listed in Table 4. The receiver operating characteristic (ROC) curve is illustrated in Figure 11; it shows that the proposed system works efficiently under different conditions.
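The storage figures above follow from simple arithmetic:

```python
# 96 features per iris, each stored as a 16-bit value (Section 4.1)
bytes_per_iris = 96 * 16 // 8        # 192 bytes per iris template
total_bytes = 1500 * bytes_per_iris  # all 1500 patterns
total_kb = total_bytes / 1024        # 281.25 KB encrypted file
```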

Figure 11. ROC analysis of the iris system: genuine accept rate versus false accept rate, with the line of equal distribution shown for reference.


Table 3. Iris confirmation phase results

Criterion                               GAR      FAR
Normal eye                              99.21%   0.79%
Eye with spectacles and contact lens    98.00%   2.00%
Disease-affected eye                    98.00%   2.00%

Table 4. Time complexities of iris recognition processes

Steps in iris recognition process              Time (seconds)
Eye with blink and detecting eyelashes         0.35
Truncation of iris and spectacles reflection   0.40
Detect flash and remove eyelashes              1.2
Normalization, dilation and intensification    2.5
Feature extraction and recognition             4.0

5. Conclusions

This paper proposes a person identification system (PID) that works with low-resolution eye images captured without special illumination. The proposed algorithm requires 96 features for identifying a person and uses Gabor filters to extract the iris features. Each iris is extracted from its image, and a fixed-length feature set is obtained. A conventional neural network is used for iris matching. This method demonstrates that spatial-domain image processing operations can be used for iris prominence checking and feature extraction. Compared with other methods, the space complexity of this method is considerably reduced. The image validity checking module prevents illegal entries into the encrypted iris file. The algorithm provides an accuracy of 99.21% under the normal criterion.

References

[1] Boles, W.W. and Boashash, B. "A Human Identification Technique Using Images of the Iris and Wavelet Transform." IEEE Trans. on Signal Processing, 46(4), pp. 1185-1188, 1998.

[2] Chitra, A. and Bremananth, R. "Secure PID using iris pattern based on circular symmetric and Gabor filters." In Proc. Int. Conf. on Advanced Computing and Communication, p. 36, 2002.

[3] Daugman, J. and Downing, C. "Epigenetic randomness, complexity and singularity of human iris patterns." Proceedings of the Royal Society B: Biological Sciences, 268, pp. 1737-1740, 2001.

[4] Hallinan, P.W. "Recognizing Human Eyes." SPIE Proc. Geometric Methods in Computer Vision, 1570, pp. 214-226, 1991.

[5] Daugman, J.G. "High Confidence Visual Recognition of Persons by a Test of Statistical Independence." IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(11), pp. 1148-1161, 1993.

[6] Ma, L., Tan, T., Wang, Y. and Zhang, D. "Personal identification based on iris texture analysis." IEEE Trans. on Pattern Analysis and Machine Intelligence, 25, pp. 1519-1533, Dec. 2003.

[7] Liu, C. and Wechsler, H. "Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition." IEEE Trans. on Image Processing, 11(4), pp. 467-476, 2002.

[8] McHugh, J., Lee, J. and Kuhla, C. "Handheld Iris Imaging Apparatus and Method." United States Patent No. 6289113, 1998.

[9] Gonzalez, R.C. and Woods, R.E. Digital Image Processing. Pearson, 2003.

[10] Rozmus, J. and Salganicoff, M. "Method and Apparatus for Illuminating and Imaging Eyes through Eyeglasses." United States Patent No. 6069967, 1997.

[11] Sanchez-Reillo, R. and Sanchez-Avila, C. "Iris Recognition with Low Template Size." In Proc. Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pp. 324-329, 2001.

[12] Lim, S., Lee, K., Byeon, O. and Kim, T. "Efficient Iris Recognition through Improvement of Feature Vector and Classifier." ETRI Journal, 23(2), pp. 61-70, 2001.

[13] Wildes, R.P. "Iris recognition: An emerging biometric technology." Proceedings of the IEEE, 85, pp. 1348-1363, Sept. 1997.

[14] Williams, G.O. "Iris Recognition Technology." IEEE Aerospace and Electronic Systems Magazine, 12(4), pp. 23-29, 1997.