
Page 1: [IEEE 2010 2nd International Symposium on Information Engineering and Electronic Commerce (IEEC) - Ternopil, Ukraine (2010.07.23-2010.07.25)]

Robust Eye Localization on Multi-View Face in Complex Background Based on SVM Algorithm

Fu You-jia, Li Jian-wei, Xiang Ru-xi Key Laboratory of Opto-electronic Technology & System of the Ministry of Education

Chongqing University Chongqing, China

[email protected]

Abstract—Focusing on multi-view eye localization in complex backgrounds, a new detection method based on an improved SVM is proposed. First, the face is located by an AdaBoost detector and the eye searching range on the face is determined. Then the candidate eye regions are found by a crossing detection method, which treats the eye and brow as an integrated feature, together with improved SVM detectors trained on large-scale multi-view eye examples. Based on the observations that a window region with a higher weight in the SVM classifier lies closer to the eye, and that the same eye tends to be detected repeatedly by nearby windows, the candidate eye regions are filtered to refine the eye location on the multi-view face. Experiments show that the method localizes eyes with high accuracy and robustness under various face poses and expressions in complex backgrounds.

Keywords: eye detection; eye localization; multi-view eye detection; multi-view eye localization; support vector machine

I INTRODUCTION

Face recognition, a frontier subject of pattern recognition and artificial intelligence, has broad applications in human-machine interfaces, biometric information security and so on. Eyes are important reference points for face-size normalization in face recognition, so their localization is essential. Past eye localization algorithms focused mainly on frontal faces, but as face applications expand and the need for practical systems grows, multi-view face recognition has gradually gained importance. Compared with frontal eye localization, multi-view eye localization can make less use of the geometric relationship between the two eyes, and the eye shape is more varied, so this research area remains relatively weak.

At present, eye localization methods fall into two categories: knowledge-based algorithms, which exploit the eye's grayscale, texture, shape or contour features, such as integral projection [1], template matching [2] and edge extraction [3]; and statistical-learning algorithms, which regard the eye as a pattern and extract its common features by learning from many eye and non-eye examples under different conditions, such as artificial neural networks (ANN), support vector machines (SVM) [4] and AdaBoost [6, 7]. The former are easily influenced by the environment and are sensitive to changes in face pose. The latter can locate eyes well against a variety of complex backgrounds and have thus become an effective way to solve complicated detection problems. Literature [4] used an SVM classifier to detect eyes and achieved a detection rate of 87.6% on frontal faces. Literature [5] combined an improved AdaBoost algorithm with SVM to locate eyes very accurately, but only for faces with small pose variation. Literature [6] used an AdaBoost classifier trained on a large number of eye-pair examples to segment eye regions, and a fast radial symmetry operator to locate the eye centers. It achieved a very high localization rate for faces with pose in [-22.5°, 22.5°] and good results in [-45°, 45°]. But because the algorithm requires both eyes to be visible, eye localization on faces with larger pose, or with rotation both in and out of plane, was not addressed.

This article proposes a novel approach to eye localization based on statistical learning. First, the eye searching range is determined on the face detected by the AdaBoost detector [7]. Then the candidate eye regions are found by a crossing detection method, which uses the whole eye-and-brow feature, and an improved SVM algorithm trained on large-scale eye examples from multi-view faces. After filtering and merging, the best eye regions are obtained. The approach does not require both eyes to be visible and suits eye localization on multi-view faces (including full-profile faces) with tilting angles in [-20°, 20°]. Experiments on FERET96 and the Labeled Faces in the Wild (LFW) database [8] show its robustness and effectiveness in complex environments.

II FRAME OF MULTI-VIEW EYE LOCALIZATION

We construct a coarse-to-fine strategy that gradually narrows the eye detection range. The algorithm is shown in Fig. 1. First, the face region is determined by the AdaBoost detector, and the eye searching range is limited to a region on the upper half of the face, which eliminates interference from non-face background, nostrils, hair and mouth. Second, a crossing detection method searches for the eye as follows: a detection window scans the eye searching region horizontally to determine the horizontal region of the candidate eye with the horizontal eye-region classifier; then another detection window scans the detected horizontal region vertically to obtain the candidate eye regions with the eye classifier. The horizontal window contains the eye and the brow above it, which helps distinguish the eye from the background. The vertical window focuses on the distinction between eye

978-1-4244-6974-1/10/$26.00 ©2010 IEEE


and brow. Finally, the candidate eye regions are filtered based on two facts: a window region with a higher weight in the SVM classifier lies closer to the eye, and the same eye tends to be found repeatedly by nearby windows. The exact eye regions are then located.

Fig. 1. Process of multi-view eye localization

III IMPROVED SVM LEARNING WITH LARGE SCALE TRAINING DATA

A The SVM theory and its limitations

Targeted at two-class classification, SVM maps low-dimensional vectors into a high-dimensional space, so that vectors not separable in the low-dimensional space become, as far as possible, linearly separable in the high-dimensional one. By constructing an optimal separating surface in the high-dimensional space, SVM trades off maximizing the margin between the two classes against minimizing the classification error, thereby implementing structural risk minimization. Given the data set D = \{(x_i, y_i)\}_{i=1}^{l}, x_i \in R^d, y_i \in \{-1, 1\}, finding the optimal separating surface can be converted into the following quadratic programming problem:

\min_{\alpha}\ \tfrac{1}{2}\alpha^{T} Q \alpha - e^{T}\alpha
\quad \text{s.t.}\quad y^{T}\alpha = 0,\ \ 0 \le \alpha_i \le C,\ \ i = 1, 2, \ldots, l \qquad (1)

where \alpha = (\alpha_1, \alpha_2, \ldots, \alpha_l)^{T} and \alpha_i is the Lagrange multiplier; y = (y_1, y_2, \ldots, y_l)^{T}; e is the l-dimensional column vector whose every entry is 1; C is the penalty coefficient; Q is the l \times l symmetric positive definite kernel matrix with components Q_{ij} = y_i y_j K(x_i, x_j).

Solving (1) yields the classifier:

y(x) = \mathrm{sign}\Big( \sum_{i=1}^{l} \alpha_i y_i K(x, x_i) - b \Big) \qquad (2)

where each x_i with \alpha_i \neq 0 is called a support vector; obviously, only the support vectors contribute to expression (2).

From programming problem (1) we can see that when the number of training samples l is very large, the matrix Q is also very large, which consumes much memory and makes training very long or even infeasible. So SVM is only really effective on small-scale data sets.
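The decision function (2) can be sketched in Python. This is an illustrative sketch, not the authors' code: it assumes already-trained multipliers, labels, support vectors and bias, and uses a quadratic polynomial kernel as the paper later adopts; the exact form K(x, z) = (x·z + 1)² is our assumption, since the paper does not spell out the polynomial.

```python
import numpy as np

def poly_kernel(x, z, degree=2):
    # Quadratic polynomial kernel; the exact form (x.z + 1)^2 is assumed.
    return (np.dot(x, z) + 1.0) ** degree

def svm_decide(x, alphas, ys, svs, b, kernel=poly_kernel):
    # Decision function (2): sign(sum_i alpha_i * y_i * K(x, x_i) - b).
    f = sum(a * yi * kernel(x, xi) for a, yi, xi in zip(alphas, ys, svs)) - b
    return 1 if f >= 0 else -1
```

Only support vectors (nonzero multipliers) need to be passed in, since all other terms vanish from the sum.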

B Extract the boundary samples

By expression (2), the optimal separating surface is determined only by the support vectors. Because support vectors can only lie on the boundary between the two classes, we choose boundary samples as the training ones.

To find the samples on the boundary between the two classes, we choose, from each class, the samples nearest to the center of the other class as boundary samples. Since SVM seeks the optimal separating surface in the high-dimensional space, the ordinary Euclidean distance does not apply; the samples must first be mapped into the high-dimensional space.

Let C_k be the center of class k, l_k the number of samples in class k, x_{kj} the j-th sample of class k, and d_{kj}^2 the squared distance from x_{kj} to the center of the other class, k = 1, 2. Then:

d_{kj}^2 = \| \Phi(x_{kj}) - C_{\bar{k}} \|^2 = \Big\| \Phi(x_{kj}) - \frac{1}{l_{\bar{k}}} \sum_{i=1}^{l_{\bar{k}}} \Phi(x_{\bar{k}i}) \Big\|^2
= K(x_{kj}, x_{kj}) - \frac{2}{l_{\bar{k}}} \sum_{i=1}^{l_{\bar{k}}} K(x_{kj}, x_{\bar{k}i}) + \frac{1}{l_{\bar{k}}^2} \sum_{i=1}^{l_{\bar{k}}} \sum_{i'=1}^{l_{\bar{k}}} \langle \Phi(x_{\bar{k}i}), \Phi(x_{\bar{k}i'}) \rangle

where \bar{k} = 1 when k = 2 and \bar{k} = 2 when k = 1. The third term does not change with sample j, so it can be ignored. Then:

d_{kj}^2 = K(x_{kj}, x_{kj}) - \frac{2}{l_{\bar{k}}} \sum_{i=1}^{l_{\bar{k}}} K(x_{kj}, x_{\bar{k}i}) \qquad (3)

where K is the kernel function. Substitute the training samples into formula (3) and sort the results within each class in ascending order. From the front of each queue, some samples are selected as the boundary samples of that class.
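Formula (3) turns directly into a small selection routine. The sketch below is ours, not the paper's code; the linear kernel used in the test is only for illustration.

```python
import numpy as np

def boundary_samples(X_own, X_other, m, kernel):
    # Squared feature-space distance of each sample in X_own to the centre
    # of the other class, per formula (3); the constant third term is dropped
    # because it does not depend on the candidate sample.
    d2 = np.array([
        kernel(x, x) - (2.0 / len(X_other)) * sum(kernel(x, z) for z in X_other)
        for x in X_own
    ])
    order = np.argsort(d2)  # ascending: nearest to the other class first
    return [X_own[i] for i in order[:m]]
```

With a linear kernel, the samples of one class closest to the other class's mean come out first.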

C Improved SVM algorithm

Literature [9] randomly selected a small set from the large sample set for SVM training, then iteratively adjusted the initial set over the whole training sample according to the KKT conditions until convergence, so as to solve the problem of SVM learning with large-scale training data. Literature [10] observed that in incremental SVM training, the misclassified samples often lie on the boundary and are influenced by noise when the sample set is large, but with small-scale samples they reflect the details of the training data. When the misclassified samples are added to the historical training set and training is repeated, the support vectors move closer to the solution, and the accuracy of the classifier increases with the added training samples, approaching the expected risk over the sample space.

According to the discussion above, our improved SVM training algorithm is as follows, where i is the iteration number:

1) According to expression (3), choose m boundary samples from the positive training set A+ and the negative one A- to form the working sets Aw+(0) and Aw-(0).

2) Solve programming problem (1) to obtain the support vector sets Asv+(i), Asv-(i) and the classifier.

3) Classify A+ and A- with the classifier. Let B+(i) be the set of rejected true samples and B-(i) the set of accepted false samples.

4) If B+(i) ≠ Ø and B-(i) ≠ Ø: according to expression (3), choose from B+(i) the n samples nearest to B-(i), and from B-(i) the n samples nearest to B+(i), forming B+n(i) and B-n(i). Let Aw+(i+1) = Asv+(i) + B+n(i) and Aw-(i+1) = Asv-(i) + B-n(i); go to 2.

5) If B+(i) = Ø and B-(i) ≠ Ø: according to expression (3), choose from B-(i) the n samples nearest to Asv+(i) to form B-n(i). Let Aw+(i+1) = Asv+(i) and Aw-(i+1) = Asv-(i) + B-n(i); go to 2.

6) If B+(i) ≠ Ø and B-(i) = Ø: according to expression (3), choose from B+(i) the n samples nearest to Asv-(i) to form B+n(i). Let Aw+(i+1) = Asv+(i) + B+n(i) and Aw-(i+1) = Asv-(i); go to 2.

7) If B+(i) = Ø and B-(i) = Ø, training ends and the last classifier is the one we need.

If the number of samples in B+(i) or B-(i) is less than n, let B+n(i) = B+(i) or B-n(i) = B-(i).
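The loop above can be written as a generic skeleton. The `train` and `boundary` callables are placeholders we introduce for illustration: in the paper, `train` solves problem (1) and `boundary` selects samples via formula (3); the toy classifier in the test below is not an SVM.

```python
def iterative_svm_train(A_pos, A_neg, m, n, train, boundary):
    # train(P, N) -> (sv_pos, sv_neg, classify), classify(x) -> +1 / -1.
    # boundary(cands, ref, k) -> the k candidates nearest to ref (formula (3)).
    Wp = boundary(A_pos, A_neg, m)  # step 1: initial working sets
    Wn = boundary(A_neg, A_pos, m)
    while True:
        sv_p, sv_n, classify = train(Wp, Wn)               # step 2
        B_pos = [x for x in A_pos if classify(x) != 1]     # step 3: rejected true
        B_neg = [x for x in A_neg if classify(x) != -1]    # step 3: accepted false
        if not B_pos and not B_neg:
            return classify                                # step 7: converged
        # steps 4-6: grow working sets with the hardest misclassified samples
        Wp = sv_p + (boundary(B_pos, B_neg or sv_n, min(n, len(B_pos))) if B_pos else [])
        Wn = sv_n + (boundary(B_neg, B_pos or sv_p, min(n, len(B_neg))) if B_neg else [])
```

Plugging in a trivial midpoint classifier on separable 1-D data, the loop converges in one iteration.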

IV IMPLEMENTATION OF THE MULTI-VIEW EYE DETECTION ALGORITHM

A The eye searching regions

The face in the input image is located by the AdaBoost detector. As the eye lies only on the upper half of the face and below the forehead, only this region needs to be searched. Let h_face and w_face be the height and width of the face rectangle, with the origin at its upper-left corner. For any point P(x, y) in the searching region, 0 ≤ x ≤ w_face and 11/48·h_face ≤ y ≤ 7/12·h_face (Fig. 4a). This searching region not only covers the eyes and brows of multi-view faces with in-plane rotation between -20° and 20°, but also excludes the hair and nostrils.
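The bounds above translate directly into code; a small sketch (rounding to integer pixel rows is our choice):

```python
def eye_search_region(face_w, face_h):
    # Eye-search rectangle inside the face box (origin at upper-left corner):
    # full width, rows from 11/48 * h_face down to 7/12 * h_face.
    y0 = round(11 * face_h / 48)
    y1 = round(7 * face_h / 12)
    return 0, y0, face_w, y1
```

For a 96×96 face box this gives rows 22 through 56.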

B The crossing detection method of eyes

We propose a crossing searching method to detect the eyes. First, a detection window scans the searching region horizontally to get the horizontal region of the candidate eye (Fig. 4b). After that, another detection window scans the horizontal candidate region vertically to obtain the candidate eye regions. Let h_sear be the height of the searching region, and h_hwin and w_hwin the height and width of the detection window W_h that moves horizontally through the searching region. As W_h focuses on the difference between the eye-and-brow whole and the cheek, nose bridge and ear, we set h_hwin = h_sear. As the ratio of eye width to face width is about 1/5, w_hwin = 0.2·w_face. Let t_hwin be the moving step of W_h; t_hwin = 0.25·w_hwin.

Let h_vwin and w_vwin be the height and width of the detection window W_v that moves vertically through the horizontal candidate region. As W_v must contain the whole eye, w_vwin = w_hwin and h_vwin = 0.4·h_hwin. Let t_vwin be the moving step of W_v; t_vwin = 0.25·h_hwin. After the vertical detection, the brow is excluded and the candidate eye region is obtained.

During detection, the size of each window region is normalized to the input sample size of the SVM classifier.
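The window geometry above can be gathered in one helper; a sketch that follows the stated ratios exactly, including the vertical step t_vwin = 0.25·h_hwin as given in the text:

```python
def window_geometry(face_w, search_h):
    # Horizontal window Wh: full search height, 1/5 of the face width,
    # step = a quarter of its own width.
    h_hwin, w_hwin = search_h, 0.2 * face_w
    t_hwin = 0.25 * w_hwin
    # Vertical window Wv: same width as Wh, 0.4 of Wh's height,
    # step = 0.25 * h_hwin (as specified in the text).
    h_vwin, w_vwin = 0.4 * h_hwin, w_hwin
    t_vwin = 0.25 * h_hwin
    return (h_hwin, w_hwin, t_hwin), (h_vwin, w_vwin, t_vwin)
```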

C The multi-view eye training samples

From FERET, we select images containing faces with bare eyes or with glasses at poses of 0°, 22.5°, 45°, 67.5° and 90°. After rotating them by 10° and 20°, we obtain a total of 1400 faces that the AdaBoost detector locates correctly. From these, eyes are cropped by hand from 240 faces as the initial training samples. The remaining samples are obtained by the bootstrap method during detection: the training set is expanded with the eyes and non-eyes taken from falsely detected or undetected images. Finally, 1050 positive and 2400 negative samples of horizontal window regions are obtained. A positive horizontal window contains an eye and brow; a negative one contains no eye or brow, or only a small part of an eye. These samples are normalized to 28×40. Some of the horizontal window samples are shown in Fig. 2. With the same approach, 1430 eye samples and 4600 non-eye samples are obtained and normalized to 28×16. Some eye and non-eye samples are shown in Fig. 3.

(a) The positive samples of horizontal moving window

(b) The negative samples of horizontal moving window

Fig. 2. The horizontal window samples for training

(a) Eye samples

(b) Non-eye samples

Fig. 3. The eye and non-eye samples for training

D Sample preprocessing

First, the samples are converted to grayscale and histogram-equalized. Then each 28×40 horizontal window sample is transformed into a 315-dimensional vector by an orthogonal wavelet, reducing the dimensionality and improving the training speed. For robustness on low-resolution images, only a single-level wavelet decomposition is used, keeping the low-frequency component of the sample. The eye and non-eye samples are not decomposed.
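The preprocessing can be sketched with NumPy. Histogram equalization follows the standard CDF remapping; for the wavelet step we show a plain Haar-style low-frequency band as a stand-in. Note that on a 28×40 sample the Haar band has 14×20 = 280 values, so the paper's 315-dimensional vector implies a longer wavelet filter with border padding, which this sketch does not reproduce.

```python
import numpy as np

def hist_equalize(img):
    # Standard histogram equalization for an 8-bit grayscale image.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1.0)
    return cdf.astype(np.uint8)[img]

def haar_lowfreq(img):
    # Single-level Haar-style low-frequency band: mean of each 2x2 block.
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    a = img[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0
```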


E The implementation of SVM training and detection

We select the quadratic polynomial as the SVM kernel function, with C = 1.0. The classifier for the horizontal moving window and the classifier for the eye are obtained by the training method in Section III. Because the negative samples far outnumber the positive ones, the SVM classification tends toward false rejection rather than false acceptance, so an adjustable threshold is used for compensation during detection. Expression (2) thus becomes:

f(x) = \sum_{i=1}^{l} \alpha_i y_i K(x, x_i) - b \qquad (4)

\psi(x) = \begin{cases} 1, & f(x) \ge \theta \\ -1, & \text{otherwise} \end{cases} \qquad (5)

where f(x) is called the weight function of the classifier.

As the eyes lie only in the horizontal candidate regions detected by the horizontal-region classifier, a relatively low threshold is set in expression (5) to ensure that these regions contain the real eyes. By experiment, θ = -0.18 and θ = -0.44 are used for the two classifiers, respectively.

V FILTERING AND MERGING THE DETECTED REGION

A Filtering the detected regions

According to SVM theory, a sample with a higher value of expression (4) lies farther from the separating surface. To make the detected region as close as possible to the real eye, we do not simply take every window with ψ = 1 as a candidate region, as the traditional method does; instead, we process the detection results as follows.

• Obtaining the horizontal regions of the candidate eye

1) Sort the regions scanned by the horizontal moving window in descending order of their values in (4), obtaining the region sequence q_h = {f_1, f_2, ..., f_n}. Let S_A and S_B be the two horizontal regions of candidate eyes.

2) Select f_1 into S_A. If ψ_hor = 1 for f_1, continue through q_h and add at most 2 more regions into S_A that meet the following conditions:
a) ψ_hor = 1 for the region;
b) letting n_t be the number of steps the window moves horizontally from the region to f_1, 0 < |n_t| ≤ 3.

3) Traverse q_h and select the first region that does not intersect S_A as S_B; call it f_p. If ψ_hor = 1 for f_p, continue through q_h and add at most 2 more regions into S_B that meet the following conditions:
a) ψ_hor = 1 for the region;
b) letting n_t be the number of steps the window moves horizontally from the region to f_p, 0 < |n_t| ≤ 3.

• Obtaining the candidate eye regions

Scan the horizontal candidate regions with the vertical moving detection window, and within each horizontal region sort the window regions in descending order of their values in (4). Traverse each sequence and select at most 3 regions with ψ = 1 as the candidate eye regions (Fig. 4c).

B Process the overlapping eye windows and locate the eye

Because the detection step is small, adjacent window regions are highly similar, and the same eye may be detected by several adjacent windows, resulting in multiple overlapping candidate eye regions. We process them as follows.

Let seq = {R_1, R_2, ..., R_n} be the sequence of candidate eye regions in S_A or S_B, where R_k = (x, y, w, h) is a rectangle, (x, y) the coordinate of its upper-left corner, and w and h its width and height. For R_k and R_t, let D_w = 0.5·R_t.w and D_h = 0.5·R_t.h; the two rectangles overlap if all of the conditions hold: R_k.x ≤ R_t.x + D_w, R_k.x ≥ R_t.x - D_w, R_k.y ≤ R_t.y + D_h and R_k.y ≥ R_t.y - D_h.

After the overlap detection, let Seq_op = {E_1, E_2, ..., E_m} be the sequence of groups of mutually overlapping rectangle windows within S_A or S_B; a single rectangle window also forms a group in Seq_op. For a group E_i in Seq_op, let n be the number of rectangle windows in E_i (n ≥ 1), {f_1^i, f_2^i, ..., f_n^i} their weights, and

\bar{f}^i = \frac{1}{n} \sum_{j=1}^{n} f_j^i

the average weight of E_i. We select the groups with the most overlapping windows to form the sequence Seq_1 ⊆ Seq_op, and from Seq_1 select the group with maximal \bar{f}^i as the candidate eye group in S_A or S_B. The average rectangle of the candidate eye group is taken as the final eye region. Fig. 4 illustrates the whole process of detecting and filtering the eyes.

Fig. 4. An illustration of the process of detecting and filtering eyes: (a) the face and its eye searching region, (b) the horizontal regions of the candidate eye, (c) the candidate eye regions, (d) the candidate eye groups, (e) the eye regions
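The overlap test and merging step described above can be sketched as follows. The greedy group assignment (a window joins the first group it overlaps) is our simplification of the grouping described in the text.

```python
import numpy as np

def overlaps(r, s):
    # Rectangles are (x, y, w, h), origin at the upper-left corner.
    # Overlap per the text: |r.x - s.x| <= 0.5*s.w and |r.y - s.y| <= 0.5*s.h.
    return abs(r[0] - s[0]) <= 0.5 * s[2] and abs(r[1] - s[1]) <= 0.5 * s[3]

def merge_candidates(rects, weights):
    # Greedily group overlapping windows, pick the group with the most
    # windows (ties broken by the highest average weight), and return the
    # average rectangle of that group as the final eye region.
    groups = []
    for i, r in enumerate(rects):
        for g in groups:
            if any(overlaps(r, rects[j]) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    best = max(groups, key=lambda g: (len(g), np.mean([weights[j] for j in g])))
    return tuple(np.mean([rects[j] for j in best], axis=0))
```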

VI EXPERIMENTS AND EVALUATION

A Dataset

We test the proposed method on FERET and LFW. The LFW dataset contains more than 13,000 real-life face images with complex backgrounds; the only constraint on these faces is that they can be detected by the Viola-Jones face detector. All images in LFW are 250×250.

From FERET we randomly select faces different from the training samples: frontal faces and profile faces with poses of 15°, 22.5°, 45°, 67.5° and 90°, 100 faces at each pose. All these faces are then rotated in plane by ±10° and ±20°. After removing the faces the AdaBoost detector cannot find, we obtain 1820 test samples.


We randomly select 500 images from LFW; these images contain faces with different poses and expressions, faces wearing glasses, eyes open or closed, and images of low clarity.

B Evaluation protocol

To evaluate the precision of eye localization, the relative error measure [6] is used as the localization criterion. Let d_l and d_r be the Euclidean distances between the automatically located left (right) eye position and the manually marked left (right) eye position, and d_lr the Euclidean distance between the two manually marked eye positions. The relative error is defined as err = max(d_l, d_r)/d_lr; if err < 0.20, the eye localization is considered successful.
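The criterion above is straightforward to implement; a sketch:

```python
import math

def relative_error(auto_l, auto_r, marked_l, marked_r):
    # err = max(d_l, d_r) / d_lr, where d_lr is the distance between the
    # two manually marked eye positions (the inter-ocular distance).
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    d_lr = dist(marked_l, marked_r)
    return max(dist(auto_l, marked_l), dist(auto_r, marked_r)) / d_lr
```

A localization counts as a success when the returned value is below 0.20.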

C Experimental results and evaluation

Our test environment is an Intel P4 2.8 GHz CPU with 2 GB memory, Windows XP, VC.NET 2005 and the OpenCV programming tools. The AdaBoost face detector is provided by OpenCV.

1) Experiments on multi-view eye localization

Table I shows the results of the eye localization algorithm of literature [6] on FERET. Table II shows the eye localization results of our method on FERET. Our method achieves a very high correct rate for swing angles in [-22.5°, 22.5°] and tilting angles in [-10°, 10°], equivalent to literature [6]. For faces with larger pose, our method also achieves a better correct rate than literature [6].

TABLE I RESULTS PRESENTED IN THE LITERATURE [6] ON FERET

Swing angle            -45°    -22.5°   0°      22.5°   45°
Proportion (err<0.2)   79.6%   98.1%    100%    97.4%   80.2%

TABLE II OUR RESULTS ON FERET (%) (ERR<0.2)

TA \ SA   0°      15°     22.5°   45°     67.5° and up
0°        100     100     98.0    88.0    82.2
10°       97.4    97.2    94.1    86.2    79.5
20°       93      91.6    84.3    82.7    75.3

Note: "SA" refers to the swing angle, by which the face rotates around the vertical axis; "TA" refers to the tilting angle, by which the face rotates around the normal axis of the view plane.

2) Robustness experiment on complex-background images

Table III shows the results of our method on LFW. For images with different poses, expressions and low clarity, our method achieves a correct rate of 93.6% at err < 0.20. The experiment also shows that our method takes about 0.12 s to locate the eyes in one image. Some results on the test images are shown in Fig. 5.

TABLE III OUR RESULTS ON LFW

Precision    err<0.05   err<0.10   err<0.15   err<0.20   err<0.25
Proportion   24.2%      75.4%      88.0%      93.6%      97.2%

VII CONCLUSIONS

To solve the problem of locating eyes on multi-view faces in complex backgrounds, we present a robust eye localization method that makes full use of prior knowledge about eye features and the advantages of SVM with large-scale training data. The crossing detection method exploits the holistic features of eye and brow to simplify the detection problem. The improved SVM algorithm trained on large-scale multi-view eye samples shortens training time and enhances classification accuracy. Finally, filtering based on the weights in the SVM classifier and merging of overlapping windows are used to locate the exact eye region. The experimental results show that the method is accurate, robust, and adapts to eye localization under various face poses and expressions in complex backgrounds.

REFERENCES

[1] Gengxin, Zhou Zhihua, Chen Shifu. Eye Location on Hybrid Projection Function [J]. Journal of Software, Beijing, vol. 14, pp. 1000-9825, August 2003. (in Chinese)
[2] Shi Huirong, Zhang Xueshuai, Liangyan, Zhang Hongcai, Cheng Yongmei. A Fast and More Accurate Template Matching Eye Location Method Based on Fuzzy Classification [J]. Journal of Northwestern Polytechnical University, Xi'an, vol. 23, pp. 55-59, January 2005. (in Chinese)
[3] Kawaguchi T, Hikada D, Rizon M. Detection of the eyes from human faces by Hough transform and separability filter [C]. In: Proceedings of the IEEE International Conference on Image Processing. Vancouver, BC: IEEE Computer Society, 2000, pp. 49-52.
[4] Hu Tao, Wang Jiale. Eye detection based on SVM [J]. Computer Engineering and Applications, Beijing, vol. 44, pp. 188-190, August 2008. (in Chinese)
[5] Tang Xusheng, Ou Zongying, Su Tieming, Zhao Pengfei. Fast Locating Human Eyes in Complex Background [J]. Journal of Computer Aided Design & Computer Graphics, Beijing, vol. 18, pp. 1535-1540, October 2006. (in Chinese)
[6] Zhang Wencon, Li Xin, Yao Peng, Li Bin, Zhuang Zhenquan. A Robust Eye Localization Algorithm for Face Recognition [J]. Chinese Journal of Electronics, Beijing, vol. 25, pp. 337-342, March 2008. (in English)
[7] Paul Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features [C]. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, USA, 2001.
[8] http://vis-www.cs.umass.edu/lfw/
[9] Osuna E, Freund R, Girosi F. An improved training algorithm for support vector machines [C]. In: Proceedings of the 1997 IEEE Workshop on Neural Networks for Signal Processing. New York: IEEE Press, 1997, pp. 276-285.
[10] Xiao Rong, Wang Jicheng, Sun Zhengxing, Zhang Fuyan. An Approach to Incremental SVM Learning Algorithm [J]. Journal of Nanjing University (Natural Sciences), Nanjing, vol. 38, pp. 152-157, February 2002. (in Chinese)

Fig. 5. Some experimental results on the test images