

An Improved Iris Recognition Method Based on Gray Surface Matching

Dong-Mei Wu
College of Communication and Information Engineering
Xi’an University of Science and Technology
Xi’an, China
e-mail: [email protected]

Jiang-Nan Wang
College of Communication and Information Engineering
Xi’an University of Science and Technology
Xi’an, China
e-mail: [email protected]

Abstract—This paper proposes a new method to overcome the influence of illumination intensity differences, which arise during the collection session, on the recognition result. This problem is not effectively solved by the original algorithm. The proposed algorithm moves the two matching feature surfaces onto one equivalence gray plane before matching. It also uses the theory of gray surface matching based on feature analysis, but a different distance is adopted to reduce the amount of calculation. First, it computes the gray difference of corresponding pixels by subtracting one surface from the other, yielding the gray difference surface. Second, it computes the mean square of the gray difference surface and takes this value as the distance between the two surfaces. Finally, it compares the result with a given threshold to implement iris recognition. Simulation and experimental results illustrate the performance of this method in terms of both correct recognition rate and total recognition time.

Keywords—equivalence gray plane; gray surface; iris recognition; illumination intensity

I. INTRODUCTION

In recent years, with the development of information technology and growing security needs, technology based on biometric identification has developed rapidly. Due to its high reliability, non-contact acquisition and other advantages, iris recognition has become a hot topic in the field of biometric identification.

Three algorithms can be regarded as representative among traditional iris recognition algorithms [1]-[5]. These iris recognition technologies usually use various mathematical methods to extract eigenvalues from the original iris images and compare them with the eigenvalues of iris images registered in advance. They then determine whether the two irises come from the same eye according to the degree of the matching result.

This paper improves the recognition rate of the original algorithm based on surface matching [6]. The main idea of the algorithm is to regard the iris region as a gray surface in three-dimensional (3D) space (the position and gray value of the iris pixels). When two irises come from the same eye, the two gray surfaces should have the same or a similar shape because they share the same veins; when they come from different eyes, the shapes differ. So we can determine whether two irises come from the same eye if we are able to detect the differences between the gray surfaces in 3D space.

In the original method there is no effective solution to the fact that the illumination intensity is not exactly the same while the iris is being collected, so the identification performance is not very good. In order to reduce the influence of this factor on the recognition result, this paper proposes to move the two matching surfaces onto one equivalence gray plane.

Because the matching operation is performed directly on the iris gray image, no eigenvalue extraction is required: the whole iris area, instead of a small group of numbers, takes part in the matching process. Moreover, it is very difficult to imitate a live iris image, so notable advantages of this method are improved system safety and reduced computation.

II. IRIS IMAGE PREPROCESSING

Iris image preprocessing mainly includes locating the iris region and transforming it into a normalized iris image. The normalized image is then enhanced in order to emphasize the iris veins, which are useful for the matching process. The original iris images we obtain include not only the iris but also useless information such as the pupil, eyelids, eyelashes and sclera, as shown in Fig. 1, and the location, size and area of the iris in the whole image differ from image to image. So the iris image must be preprocessed before extracting the iris area. First, the iris is segmented; then differences caused by translation, size and so on are compensated; finally, the iris image is normalized.

Figure 1. Original iris image

Figure 2. Image of iris localization



Figure 3. Normalized iris image

Figure 4. Image of histogram equalization

The localization method used in this paper takes the characteristics of the eye into account. It uses the principle that a circle is determined by three points not lying on a common line to find the inner and outer edges of the iris. The algorithm first locates a random point inside the pupil, and then uses an edge detection template to search for three different points on the inner edge.

Since the gray value of the pupil is the smallest, a sum template is used to find the minimum point, which lies inside the pupil. The gray value changes sharply across the inner and outer boundaries. So after finding the point in the pupil, the algorithm searches for the three maximum gradients along three different directions and uses these three points to locate the iris, as shown in Fig. 2.
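The paper does not give implementation details for the template search, so the following is only a minimal sketch of the three-point localization idea: a small sum (averaging) template seeds a point inside the pupil, the maximum gray-level change is sought along three directions, and a circle is fitted through the three resulting points. The template size, the three search directions and the search radius are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the three-point inner-edge localization described above.
import numpy as np
from scipy.ndimage import uniform_filter

def seed_point_in_pupil(img):
    """Return the (row, col) of the minimum of the sum-template response."""
    response = uniform_filter(img.astype(float), size=3)  # assumed 3x3 template
    return np.unravel_index(np.argmin(response), response.shape)

def max_gradient_point(img, seed, angle, max_radius=120):
    """Walk from the seed along `angle` and return the point of maximum gray change."""
    r0, c0 = seed
    best, best_grad = seed, -1.0
    prev = float(img[r0, c0])
    for t in range(1, max_radius):
        r = int(round(r0 + t * np.sin(angle)))
        c = int(round(c0 + t * np.cos(angle)))
        if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
            break
        grad = abs(float(img[r, c]) - prev)
        if grad > best_grad:
            best, best_grad = (r, c), grad
        prev = float(img[r, c])
    return best

def circle_from_three_points(p1, p2, p3):
    """Center and radius of the circle through three non-collinear (row, col) points."""
    (y1, x1), (y2, x2), (y3, x3) = p1, p2, p3
    a = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    cx, cy = np.linalg.solve(a, b)
    return (cy, cx), float(np.hypot(cx - x1, cy - y1))

def locate_inner_edge(img):
    """Seed a point in the pupil, then fit a circle through three boundary points."""
    seed = seed_point_in_pupil(img)
    pts = [max_gradient_point(img, seed, a) for a in (0.0, 2*np.pi/3, 4*np.pi/3)]
    return circle_from_three_points(*pts)
```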

In order to achieve accurate matching between two irises, the two images need to be normalized. The iris region is projected onto polar coordinates [7]. The mapping is:

x(r, θ) = (1 − r)·x_p(θ) + r·x_q(θ)    (1)

y(r, θ) = (1 − r)·y_p(θ) + r·y_q(θ)    (2)

where p(θ) = (x_p(θ), y_p(θ)) and q(θ) = (x_q(θ), y_q(θ)) are the corresponding points on the pupil boundary and the outer iris boundary. There may be some occlusion from the eyelashes and eyelids, so only part of the normalized region is used in order to reduce the influence of this factor on the recognition result. In this paper the [−210°, 30°] sector was chosen. The normalized result is shown in Fig. 3.
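A hedged sketch of the normalization defined by equations (1) and (2) is given below. The output resolution (64 × 512) and the circle-parameter interface are assumptions for illustration; the paper only specifies the mapping itself and the [−210°, 30°] sector.

```python
# Sketch of rubber-sheet normalization, equations (1)-(2).
import numpy as np

def normalize_iris(img, pupil_center, pupil_radius, iris_center, iris_radius,
                   radial_res=64, angular_res=512,
                   theta_start=np.deg2rad(-210), theta_end=np.deg2rad(30)):
    """Unwrap the annular iris region into a rectangular (r, theta) image."""
    thetas = np.linspace(theta_start, theta_end, angular_res, endpoint=False)
    rs = np.linspace(0.0, 1.0, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=img.dtype)
    for j, th in enumerate(thetas):
        # boundary points p(theta) on the pupil edge and q(theta) on the iris edge
        xp = pupil_center[1] + pupil_radius * np.cos(th)
        yp = pupil_center[0] + pupil_radius * np.sin(th)
        xq = iris_center[1] + iris_radius * np.cos(th)
        yq = iris_center[0] + iris_radius * np.sin(th)
        for i, r in enumerate(rs):
            x = (1 - r) * xp + r * xq           # equation (1)
            y = (1 - r) * yp + r * yq           # equation (2)
            yi = int(np.clip(round(y), 0, img.shape[0] - 1))
            xi = int(np.clip(round(x), 0, img.shape[1] - 1))
            out[i, j] = img[yi, xi]
    return out
```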

Histogram equalization is used to enhance the iris image, as shown in Fig. 4.
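For completeness, the enhancement step can be as simple as the following, assuming an 8-bit normalized image and the availability of OpenCV (the paper does not name a particular library):

```python
import cv2
import numpy as np

def enhance(normalized_iris: np.ndarray) -> np.ndarray:
    """Histogram equalization of an 8-bit normalized iris image (cf. Fig. 4)."""
    return cv2.equalizeHist(normalized_iris)
```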

III. FEATURE SURFACE ANALYSIS

The undulation of the surface in 3D gray space corresponds to the iris veins. The horizontal direction of the image is taken as the X axis, the vertical direction as the Y axis, and the gray value as the Z axis, forming a Cartesian coordinate system. Each pixel of the iris image can be described in this system, as shown in Fig. 5. This surface is called the feature surface in this paper.

If two feature surfaces have the same or a similar shape, their locations in 3D gray space must overlap or be parallel. The difference between the two images can therefore be measured by the distance between the two surfaces.

Figure 5. Gray surface of iris

Due to illumination differences during the collection session, the gray values of the iris surface also change. This kind of difference appears as a difference in average gray value. According to the experimental results, this difference greatly influences the matching result. The proposed algorithm therefore moves the two matching feature surfaces onto one equivalence gray plane before matching, which reduces its influence on the recognition result.
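The paper does not give an explicit formula for moving a surface onto the equivalence gray plane; one natural reading, used in this minimal sketch, is to subtract each surface's own mean gray value so that a global illumination offset between the two surfaces cancels out.

```python
# Hedged sketch: shift a feature surface onto an "equivalence gray plane",
# here interpreted (as an assumption) as the zero-mean plane.
import numpy as np

def to_equivalence_plane(surface):
    """Shift a gray surface so that its mean gray value becomes zero."""
    surface = surface.astype(np.float64)
    return surface - surface.mean()
```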

IV. MATCHING

Suppose that the feature surface of the input iris is P and the saved feature surface is Q. If P and Q come from the same iris, the two feature surfaces are the same or only slightly different. If P and Q come from different irises, the undulations of the feature surfaces differ from each other completely. The difference in shape between two surfaces in space can be measured by a distance between them in the mathematical sense. In this paper the distance is computed as follows. First, Q is subtracted from P to obtain the difference surface. Then the square of the difference surface is calculated and taken as the distance between P and Q. Compared with the original algorithm, this method reduces the amount of computation, enhances the difference between different feature surfaces, and effectively solves the problem that the feature surface changes with the illumination intensity when the iris is collected.

According to the above analysis, when P and Q come from the same iris, one of the following three cases occurs:

1) After subtracting Q from P, the two surfaces are the same if the difference surface is zero everywhere.

2) After subtracting Q from P, the two surfaces are the same if the difference surface is a constant (the illumination intensity of one feature surface may be higher than that of the other).

3) After subtracting Q from P, the difference surface is irregular (the illumination intensity may differ between them and be non-uniform). In this case the degree of variation of the gray surface should be considered: if the variation is small, P and Q come from the same iris; otherwise, they come from different irises.

This matching method can be implemented in the following two steps:

1) Compute the gray difference of the two pixels at the same position in the two different images, as described by expression (3), where R_ij is the gray difference between P and Q.

R_ij = P_ij − Q_ij    (3)



2) Calculate the average of the squares of R_ij, as in expression (4), where W and H are the width and height of the image.

Y = (1 / (W·H)) Σ_{i=0}^{W−1} Σ_{j=0}^{H−1} (R_ij)²    (4)

By comparing Y with a threshold, we can decide whether P and Q come from the same iris.
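Putting expressions (3) and (4) together with the equivalence-plane shift gives the core of the matcher, sketched below. The threshold value is only the one reported in Section V, and the zero-mean shift is the interpretation discussed earlier, not a formula stated in the paper.

```python
# Sketch of the matching distance, equations (3)-(4), with a threshold decision.
import numpy as np

def surface_distance(P, Q):
    """Mean squared gray difference between two feature surfaces."""
    R = P.astype(np.float64) - Q.astype(np.float64)   # equation (3): R_ij = P_ij - Q_ij
    return float((R ** 2).mean())                     # equation (4): (1/(W*H)) * sum of R_ij^2

def same_iris(P, Q, threshold=6501):
    """Decide whether P and Q come from the same iris (threshold from Section V)."""
    P0 = P.astype(np.float64) - P.mean()   # shift onto the equivalence gray plane
    Q0 = Q.astype(np.float64) - Q.mean()
    return surface_distance(P0, Q0) < threshold
```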

Because the eyes or head of a volunteer may rotate during collection, the vein positions of the two irises may not correspond; in other words, the two iris gray surfaces are not in the same reference frame. In that case Y increases even though P and Q come from the same iris. In order to bring P and Q into the same reference frame, the two iris images need to be rotated relative to each other. If the two iris images come from the same iris, there must be a position at which the two gray surfaces achieve their best match. To remove the influence of rotation on iris recognition, the normalized iris image is translated after Y is computed (of course, P and Q may not come from the same iris, or they may come from the same iris but not be in the same reference frame), and the search stops at the minimum of Y.
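A minimal sketch of this rotation search is shown below: since eye rotation becomes a horizontal (angular) translation of the normalized image, the stored surface is circularly shifted column by column and the smallest Y is kept. The ±10-column shift range is an assumption, not a value from the paper.

```python
# Sketch of the rotation-compensated matching: take the minimum Y over
# circular shifts of the stored surface along the angular (column) axis.
import numpy as np

def best_match_distance(P, Q, max_shift=10):
    """Minimum surface distance over circular shifts of Q along the theta axis."""
    P0 = P.astype(np.float64) - P.mean()
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        Qs = np.roll(Q.astype(np.float64), s, axis=1)   # rotate by s angular steps
        Qs -= Qs.mean()                                  # equivalence gray plane
        best = min(best, float(((P0 - Qs) ** 2).mean()))
    return best
```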

V. EXPERIMENTAL RESULTS AND ANALYSIS

The CASIA iris database (Version 1.0) of the Institute of Automation, Chinese Academy of Sciences, was used in this paper. It contains 756 images covering 108 different iris classes from 80 persons. Each iris class has 7 images, collected in two sessions, with a resolution of 320 by 280 pixels. 8046 matching experiments were performed on the 108 iris classes: 2268 intra-class matching experiments and 5778 inter-class matching experiments. Intra-class matching means matching different images of the same iris; inter-class matching means matching images of different irises.

The proposed algorithm first moves the two matching feature surfaces onto one equivalence gray plane and then subtracts one from the other, which overcomes the influence of illumination intensity differences on the recognition result. The experiments demonstrate that this method is feasible. With an appropriate threshold (in this paper, threshold = 6501), the correct recognition rate of this method is CRR = 91.61%, whereas the correct recognition rate of the original algorithm is CRR' = 88.03%. The whole recognition process of this algorithm takes about 298 ms on average, while the original algorithm takes about 1020 ms on average.

All the experiments were implemented in Visual C++ 6.0 on a computer with a Pentium 4 2.0 GHz CPU running Windows XP Professional.

VI. CONCLUSION

In this paper a new method is proposed to deal with the illumination intensity differences that influence the recognition result. The algorithm overcomes this influence by moving the two matching feature surfaces onto one equivalence gray plane. In order to reduce the amount of calculation and enhance the difference between two different irises, the square of the gray difference surface is computed. The results demonstrate that this algorithm outperforms the original algorithm in both correct recognition rate and consumed time.

ACKNOWLEDGMENT

The authors are grateful to the Institute of Automation, Chinese Academy of Sciences, for supplying the iris database, and to the Scientific Research Plan Projects of Shaanxi Provincial Department of Education (No. 08JK356) for supporting this study.

REFERENCES

[1] John Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148–1161, 1993.

[2] John Daugman, "Recognizing persons by their iris patterns," Information Security Technical Report, vol. 3, pp. 33–39, 1998.

[3] John Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 21–30, 2004.

[4] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, pp. 1348–1363, 1997.

[5] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, pp. 1185–1188, 1998.

[6] YUAN Weiqi and GAO Binxiu, "Iris Recognition Method Based on Surface Match," Computer Engineering, vol. 33, pp. 176–17, 2007.

[7] Yin Yong and Xu Chang, "An Iris Recognition Algorithm Based on Cross Power Spectrum," Journal of Image and Graphics, vol. 12, pp. 854–858, 2007.
