
Face Recognition and Its Applications

Based on works of: Jinshan Tang; Ariel P from Hebrew University; Mircea Focşa, UMFT; Xiaozhen Niu, Department of Computing Science, University of Alberta; Christine Podilchuk, [email protected], http://www.caip.rutgers.edu/wiselab

PART 1

Contents
• Introduction
• Face detection using color information
• Face matching
• Face Segmentation/Detection
• Facial Feature extraction
• Face Recognition
• Video-based Face Recognition
• Comparison
• Conclusion
• References

Face Segmentation/Detection

During the past ten years, considerable progress has been made in the multi-face recognition area. This includes: the example-based learning approach by Sung and Poggio (1994), the neural network approach by Rowley et al. (1998), and support vector machines (SVM) by Osuna et al. (1997).

Introduction

Basic steps for face recognition:

Input face image → face detection → face feature extraction → feature matching against the face database → decision maker → output result.
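As a rough illustration of this pipeline, here is a minimal Python sketch. It assumes OpenCV's bundled Haar cascade for the detection step and a plain nearest-neighbour comparison of resized grey patches for matching; it is not one of the systems discussed in these slides.

```python
import cv2
import numpy as np

# Detection step: OpenCV's bundled frontal-face Haar cascade (an assumption,
# not one of the detectors covered in these slides).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_feature(gray_face):
    """Feature extraction: here just a resized, L2-normalized pixel vector."""
    patch = cv2.resize(gray_face, (32, 32)).astype(np.float32).ravel()
    return patch / (np.linalg.norm(patch) + 1e-8)

def recognize(image_bgr, face_database):
    """Basic steps: detection -> feature extraction -> matching -> decision.

    face_database maps a person's name to a feature vector built with
    extract_feature from an enrollment image.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        feat = extract_feature(gray[y:y + h, x:x + w])
        # Feature matching: nearest neighbour over the face database.
        name, score = max(
            ((n, float(feat @ f)) for n, f in face_database.items()),
            key=lambda t: t[1])
        # Decision maker: accept the match only above a similarity threshold.
        results.append((name if score > 0.9 else "unknown", (x, y, w, h)))
    return results
```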

Face detection
• Geometric information based face detection
• Color information based face detection
• Combining them together

(Figures: (a) geometric information based face detection; (b) color information based face detection)

Color information based face detection

Face color is different from the background. The choice of color space is very important. Color spaces:

•R,G,B

•YCbCr

•YUV

•r,g

•……..

(Figure 4. Skin color vs. background color distribution in a complex background)

A face detection algorithm using color and geometric information

Ideas: (1) compensate for lighting, (2) separate skin from background by transforming to a new color (sub)space, (3) clustering.

Color can be used in segmentation and grouping of image subareas.
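A minimal sketch of color-based skin segmentation in the YCbCr space, assuming OpenCV and commonly quoted Cr/Cb skin ranges (roughly Cr in 133-173, Cb in 77-127); the exact thresholds are an assumption, not values from the slides.

```python
import cv2
import numpy as np

def skin_mask(image_bgr):
    """Segment likely skin pixels by thresholding in the YCbCr color space."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV orders channels Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds (assumed)
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds (assumed)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean the mask so connected skin regions can be grouped into face candidates.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```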

Feature-based face detection

The location and shape parameters of the eyes are the most important features; they are detected through segmentation and morphological operations (dilation and erosion). A small sketch follows below.

Ideas:

1) Eyes

2) Mouth

3) Boundary (edge detection)

4) Boundary approximated by an ellipse (Hough transform)

The concept of eye glasses

The concept of half-profiles
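To make the morphological operations and the ellipse approximation concrete, here is a small sketch; the threshold, kernel size, and iteration counts are arbitrary assumptions.

```python
import cv2
import numpy as np

def eye_candidates(gray_face):
    """Find dark blobs (eye candidates) with erosion/dilation, then fit ellipses."""
    # Eyes are usually darker than skin: keep only the darkest pixels.
    _, dark = cv2.threshold(gray_face, 60, 255, cv2.THRESH_BINARY_INV)
    kernel = np.ones((3, 3), np.uint8)
    dark = cv2.erode(dark, kernel, iterations=1)   # remove isolated noise pixels
    dark = cv2.dilate(dark, kernel, iterations=2)  # restore and merge blob shapes
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ellipses = []
    for c in contours:
        if len(c) >= 5:                            # fitEllipse needs at least 5 points
            ellipses.append(cv2.fitEllipse(c))     # (center, axes, angle)
    return ellipses
```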

Face Matching
• Feature based face matching
• Template matching

Features versus templates

Feature based face matching: face image from face detection → normalization → feature extraction → feature vector → classifier → decision maker → output results.

• You can extract various features.
• You can use various classifiers.
• You can use various decision makers.

Normalization

Eye location normalization: rotation normalization, scale normalization.

Cross-correlation between the normalized face image I and an object template T (the template is averaged over objects):

$$C_N(x, y) = \frac{\sum \bigl(I - \mathrm{mean}(I)\bigr)\,\bigl(T - \mathrm{mean}(T)\bigr)}{\lVert I - \mathrm{mean}(I) \rVert \, \lVert T - \mathrm{mean}(T) \rVert}$$
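A direct NumPy transcription of the cross-correlation measure above, for an image I and a template T of the same size (a sketch of the formula, not code from the cited works). Sliding it over a normalized face, or using cv2.matchTemplate with TM_CCOEFF_NORMED, gives a correlation map.

```python
import numpy as np

def normalized_cross_correlation(I, T):
    """C_N = sum((I - mean(I)) * (T - mean(T))) / (||I - mean(I)|| * ||T - mean(T)||)."""
    I = I.astype(np.float64) - I.mean()
    T = T.astype(np.float64) - T.mean()
    denom = np.linalg.norm(I) * np.linalg.norm(T)
    return float((I * T).sum() / denom) if denom > 0 else 0.0
```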

Feature extraction
• Eyebrow thickness and vertical position at the eye center position
• A coarse description of the left eyebrow's arches
• Nose vertical position and width
• Mouth vertical position, width, and height of upper and lower lips
• Eleven radii describing the chin shape
• Bigonial breadth (face width at nose position)
• Zygomatic breadth (face width halfway between nose tip and eyes)

Together these form a 35-D feature vector.

Example of some geometrical features

Classifier

Bayes classifier: for a feature vector x and class mean vectors m_j,

$$d_j(x) = (x - m_j)^T \Sigma_j^{-1} (x - m_j), \qquad j = 1, 2, \ldots, N$$

Compute d_j(x) for each class, rank the distance values, and output the result (the class with the smallest distance).

This is just one example of a classifier; others include decision trees, expressions, decomposed structures, and NNs.
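A minimal sketch of such a distance-based classifier, assuming one mean vector per class and a pooled covariance estimated from training features; the function names are illustrative only.

```python
import numpy as np

def train(features, labels):
    """Estimate a mean per class and the inverse of a pooled covariance matrix."""
    labels = np.asarray(labels)
    means = {c: features[labels == c].mean(axis=0) for c in sorted(set(labels.tolist()))}
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])  # regularized
    return means, np.linalg.inv(cov)

def classify(x, means, cov_inv):
    """Compute d_j(x) = (x - m_j)^T Sigma^{-1} (x - m_j), rank, and return the best class."""
    dists = {c: float((x - m) @ cov_inv @ (x - m)) for c, m in means.items()}
    ranked = sorted(dists.items(), key=lambda kv: kv[1])
    return ranked[0][0], ranked   # best class plus the full ranking
```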

ANN Classifier

ANN structures:
• One-class-in-one network
• Multi-class-in-one network

(Fig. 2. One-class-in-one network: feature vector → one network per class (Class 1, Class 2, …) → MAXNET → classification results)
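One way to read the one-class-in-one-network plus MAXNET structure: train one small network per class to score its own class, then let a max-selection stage pick the winner. The sketch below uses scikit-learn MLPs on synthetic data purely as an assumed stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 35))       # e.g. 35-D geometric feature vectors (synthetic)
y = rng.integers(0, 3, size=300)     # three face classes (synthetic labels)

# One-class-in-one network: one small MLP per class, trained one-vs-rest.
nets = {}
for c in np.unique(y):
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    net.fit(X, (y == c).astype(int))
    nets[c] = net

def maxnet(x):
    """MAXNET stage: the class whose network responds most strongly wins."""
    scores = {c: net.predict_proba(x.reshape(1, -1))[0, 1] for c, net in nets.items()}
    return max(scores, key=scores.get)

print(maxnet(X[0]))
```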

Template matching

Produce a template. Matching flow: face image from face detection → normalization → matching against the templates database → decision maker → output results.

You have to create a database of templates for all the people you want to recognize.

There are different templates used in various regions of the normalized face.

Various methods can be used to compress information for each template.
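A minimal sketch of the template-matching path, assuming one stored grey-level template per enrolled person and OpenCV's normalized-correlation matcher; the rejection threshold is an arbitrary assumption.

```python
import cv2

def match_face(normalized_face, template_db, threshold=0.6):
    """Compare a normalized face against every stored template; decide on the best score.

    normalized_face and each template are same-size 8-bit (or float32) grey images.
    """
    best_name, best_score = None, -1.0
    for name, template in template_db.items():
        # Normalized cross-correlation score between face and template.
        score = float(cv2.matchTemplate(normalized_face, template,
                                        cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_name, best_score = name, score
    # Decision maker: reject weak matches as "unknown".
    return (best_name, best_score) if best_score >= threshold else ("unknown", best_score)
```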

Example-based learning approach (EBL)

Three parts:
1) The image is divided into many possibly overlapping windows, and each window pattern is classified as either "a face" or "not a face" based on a set of local image measurements.
2) For each new pattern to be classified, the system computes a set of difference measurements between the new pattern and the canonical face model.
3) A trained classifier identifies the new pattern as "a face" or "not a face".

Example of a system using EBL

Neural network (NN)

Kanade et al. first proposed an NN-based approach in 1996. Although NNs have received significant attention in many research areas, few applications have been successful in face recognition.

Why?

Neural network (NN)

It is easy to train a neural network with samples that contain faces, but it is much harder to train it with samples that do not: the number of "non-face" samples is just too large.

Neural network (NN)

Neural network-based filter. A small filter window is used to scan through all portions of the image and to detect whether a face exists in each window.

Merging overlapping detections and arbitration. By setting a small threshold, many false detections can be eliminated.
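The merging/arbitration step can be illustrated with a simple overlap test: boxes that overlap strongly are collapsed into one, and a location is kept only if enough raw detections support it. This is a generic sketch, not the exact arbitration scheme of Rowley et al.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_detections(boxes, overlap=0.5, min_support=2):
    """Group overlapping boxes; keep a group only if it has enough supporting detections."""
    groups = []
    for box in boxes:
        for g in groups:
            if iou(box, g[0]) > overlap:
                g.append(box)
                break
        else:
            groups.append([box])
    return [g[0] for g in groups if len(g) >= min_support]
```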

An example of using NN

Test results of using NN

SVM (Support Vector Machine)

SVM-based face detection was first proposed in 1997; it can be viewed as a way to train polynomial, neural network, or radial basis function classifiers. It can improve accuracy and reduce computation.
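A minimal face/non-face SVM sketch with scikit-learn on synthetic window patterns; the polynomial kernel follows the remark above, but everything else (window size, data, parameters) is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
faces = rng.normal(loc=1.0, size=(200, 361))      # e.g. 19x19 window patterns, flattened
non_faces = rng.normal(loc=0.0, size=(200, 361))  # synthetic "non-face" patterns
X = np.vstack([faces, non_faces])
y = np.array([1] * 200 + [0] * 200)               # 1 = "a face", 0 = "not a face"

# Polynomial-kernel SVM, one of the kernel families mentioned above.
clf = SVC(kernel="poly", degree=2, C=1.0)
clf.fit(X, y)

window = rng.normal(loc=1.0, size=(1, 361))
print("face" if clf.predict(window)[0] == 1 else "not a face")
```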

Comparison with Example Based Learning (EBL)

Test results were reported in 1997 using two test sets (155 faces): SVM achieved a better detection rate and fewer false alarms.

Recent approaches

The face segmentation/detection research area still remains active. For example: an integrated SVM approach to multi-face detection and recognition was proposed in 2000.

A technique of background learning was proposed in August 2002.

Still lots of potential!

Static face recognition

Numerous face recognition methods/algorithms have been proposed in the last 20 years; several representative approaches are:
• Eigenface
• LDA/FDA (Linear Discriminant Analysis / Fisher Discriminant Analysis)
• Neural network (NN)
• PCA (Principal Component Analysis)
• Discrete Hidden Markov Models (DHMM)
• Continuous Density HMM (CDHMM)

Eigenface

The basic steps are:
1) Registration. A face in an input image must first be located and registered in a standard-size frame.
2) Eigenrepresentation. Every face in the database is represented as a vector of weights; principal component analysis (PCA) is used to encode face images and capture face features.
3) Identification. This is done by locating the images in the database whose weights are the closest (in Euclidean distance) to the weights of the test image.
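A minimal eigenface sketch with NumPy, following the three steps above; registration is assumed done, images arrive as equal-size grey-level vectors (one row per gallery face), and the number of components is an arbitrary assumption.

```python
import numpy as np

def train_eigenfaces(gallery, n_components=20):
    """PCA on registered face vectors: returns mean face, eigenfaces, and gallery weights.

    gallery: array of shape (n_images, n_pixels); assumes n_images >= n_components.
    """
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # SVD gives the principal components (eigenfaces) of the face space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    weights = centered @ eigenfaces.T            # each face as a vector of weights
    return mean_face, eigenfaces, weights

def identify(test_face, mean_face, eigenfaces, weights, labels):
    """Project the test image and return the gallery label with the closest weights."""
    w = (test_face - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)   # Euclidean distance in weight space
    return labels[int(np.argmin(distances))]
```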

LDA/FDA

The face recognition method using LDA/FDA is called the Fisherface method. Eigenface uses linear PCA, which is not optimal for discriminating one face class from others. The Fisherface method seeks a linear transformation that maximizes the between-class scatter and minimizes the within-class scatter. Test results demonstrated that LDA/FDA is better than eigenface using linear PCA (1997).
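The Fisherface idea can be sketched with scikit-learn: PCA first (so the within-class scatter matrix stays invertible), then LDA to maximize between-class relative to within-class scatter. The synthetic data and component counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1024))        # e.g. 32x32 face images, flattened (synthetic)
y = np.repeat(np.arange(12), 10)        # 12 subjects, 10 images each (synthetic)

# PCA reduces dimensionality so the within-class scatter matrix is invertible,
# then LDA finds the projection maximizing between-class / within-class scatter.
fisherface = make_pipeline(
    PCA(n_components=60),
    LinearDiscriminantAnalysis(n_components=11),   # at most (#classes - 1) components
    KNeighborsClassifier(n_neighbors=1),
)
fisherface.fit(X, y)
print(fisherface.predict(X[:1]))
```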

Test results of LDA

Test results of a subspace LDA-based face recognition method (1999).

Video-based Face Recognition

Three challenges:
• Low quality
• Small images
• Characteristics of face/human objects

Three advantages:
• Allows much more information
• Tracking of the face image
• Provides continuity: this allows reuse of classification information from high-quality images when processing low-quality images from a video sequence

Basic steps for video-based face recognition

1) Object segmentation/detection.
2) Structure from motion. The goal of this step is to estimate the 3D depths of points from the image sequence.
3) 3D models for faces. A 3D model is used to match frontal views of the face.
4) Non-rigid motion analysis.

Recent approaches

Most video-based face recognition systems have three modules: detection, tracking, and recognition.

An access control system using a Radial Basis Function (RBF) network was proposed in 1997.

A generic approach based on posterior estimation using sequential Monte Carlo methods was proposed in 2000.

A scheme based on streaming face recognition (SFR) was proposed in August 2002.

The Streaming Face Recognition (SFR) scheme combines several decision rules, such as Discrete Hidden Markov Models (DHMM) and Continuous Density HMM (CDHMM).

The test result achieved a 99% correct recognition rate in the intelligent room.

Comparison

The two most representative and important protocols for face recognition evaluation:
• The FERET protocol (1994). Consists of 14,126 images of 1,199 individuals. Three evaluation tests were administered in 1994, 1996, and 1997.
• The XM2VTS protocol (1999). An expansion of the previous M2VTS program (5 shots of each of 37 subjects); it now contains 295 subjects. The results of M2VTS/XM2VTS can be used in a wide range of applications.

1996/1997 FERET Evaluations: compared ten algorithms.

• Face recognition has many potential applications.
• For many years it was not very successful; we need to improve the accuracy of face recognition.
• Combine face recognition with other biometric recognition technologies, such as fingerprint recognition, voice recognition, and so on.

Conclusion

For our applications, accuracy is much more important than speed.

Significant achievements have been made recently. LDA-based methods and NN-based methods are very successful.

FERET and XM2VTS have had a significant impact on the development of face recognition algorithms. Challenges still exist, such as pose changes and illumination changes. The face recognition area will remain active for a long time.

Conclusion

References
[1] W. Zhao, R. Chellappa, A. Rosenfeld, and P.J. Phillips, Face Recognition: A Literature Survey, UMD CFAR Technical Report CAR-TR-948, 2000.
[2] K. Sung and T. Poggio, Example-based Learning for View-based Human Face Detection, A.I. Memo 1521, MIT A.I. Laboratory, 1994.
[3] H.A. Rowley, S. Baluja, and T. Kanade, Neural Network-Based Face Detection, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, 1998.
[4] E. Osuna, R. Freund, and F. Girosi, Training Support Vector Machines: An Application to Face Detection, in IEEE Conference on Computer Vision and Pattern Recognition, pp. 130-136, 1997.
[5] M. Turk and A. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, Vol. 3, pp. 72-86, 1991.
[6] W. Zhao, Robust Image Based 3D Face Recognition, PhD thesis, University of Maryland, 1999.
[7] K.S. Huang and M.M. Trivedi, Streaming Face Recognition Using Multicamera Video Arrays, 16th International Conference on Pattern Recognition (ICPR), August 11-15, 2002.
[8] P.J. Phillips, P. Rauss, and S. Der, FERET (Face Recognition Technology) Recognition Algorithm Development and Test Report, Technical Report ARL-TR-995, U.S. Army Research Laboratory.
[9] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, XM2VTSDB: The Extended M2VTS Database, in Proceedings, International Conference on Audio- and Video-based Person Authentication, pp. 72-77, 1999.