
1

Vijayakumar Bhagavatula


Biometric Recognition in Spatial Frequency Domain

2


Acknowledgments

Dr. Marios Savvides, Dr. Chunyan Xie, Dr. Jason Thornton, Dr. Krithika Venkataramani, Pablo Hennings, Jingu Heo, Sungwon Park, Ramzi Abianton, Ramamurthy Bhagavatula

Technology Support Working Group (TSWG), CyLab

3


Outline

Use of spatial frequency domain for image recognition --- correlation filters
Face recognition
Iris recognition
Fingerprint recognition
Face tracking examples
Summary

5


Challenge: Pattern Variability

To tolerate pattern variability (sometimes called distortions) while maintaining discrimination
Facial appearance changes due to illumination
Fingerprint image changes due to plastic deformation

6


Pattern Recognition Approaches

[Block diagram: INPUT → feature computation / feature extraction → classification → CLASS, informed by prior knowledge, transformations, and statistics/training data]

Statistical pattern recognition (e.g., minimum error rate methods)
Nonparametric methods (e.g., nearest-neighbor methods)
Discriminant methods (e.g., linear discriminant functions, artificial neural networks, support vector machines, etc.)
Correlation filters

7


Correlation Pattern Recognition

Determine the cross-correlation between the reference and test images for all possible shifts.

When the target image matches the reference image exactly, output is the autocorrelation of the reference image; autocorrelation peaks at origin.

If the input r(x) contains a shifted version s(x-x0) of the reference signal, the correlator will exhibit a peak at x=x0.

If the input does not contain the reference signal s(x), the correlator output will be low.

If the input contains multiple replicas of the reference signal, resulting cross-correlation contains multiple peaks at locations corresponding to input positions.

c(τ) = ∫ r(x) s(x − τ) dx

Ref: B.V.K. Vijaya Kumar, A. Mahalanobis and R. Juday, Correlation Pattern Recognition, Cambridge University Press, UK, November 2005.
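The shift-finding behavior described above can be illustrated with FFT-based cross-correlation. The sketch below is a minimal NumPy example; the image size and the shift values are made up for illustration.

```python
import numpy as np

def cross_correlate(reference, test):
    """Circular cross-correlation of two equal-size images via the FFT."""
    R = np.fft.fft2(reference)
    T = np.fft.fft2(test)
    # Correlation theorem: c = IFFT{ T * conj(R) }
    return np.real(np.fft.ifft2(T * np.conj(R)))

rng = np.random.default_rng(0)
reference = rng.standard_normal((64, 64))
test = np.roll(reference, shift=(5, 12), axis=(0, 1))  # shifted copy of the reference

c = cross_correlate(reference, test)
peak = np.unravel_index(np.argmax(c), c.shape)
print(peak)  # (5, 12): the correlation peak location recovers the shift
```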

8


Shift-Invariance

Desired pattern can be anywhere in the input scene.
Multiple patterns can appear in the scene.
Simple matched filters unacceptably sensitive to rotations, scale changes, etc.

9


SAIP ATR SDF Correlation Performance for Extended Operating Conditions

[Figure: correlation plane contour maps and correlation plane surfaces for an M1A1 in the open and an M1A1 near the tree line]

Courtesy: Northrop Grumman

Adjacent trees cause some correlation noise.

10


Correlation Filters

[Block diagram: Training -- training images feed the filter design to produce the correlation filter; Recognition -- test image → FFT → multiply by correlation filter → IFFT → correlation output → analyze → decision (match / no match)]

11


Peak to Sidelobe Ratio (PSR)

PSR = (peak − mean) / σ

1. Locate peak
2. Mask a small pixel region
3. Compute the mean and σ in a bigger region centered at the peak

PSR invariant to constant illumination changes

Match declared when PSR is large, i.e., peak must not only be large, but sidelobes must be small.
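A minimal sketch of the PSR computation described above; the 5x5 mask and 20x20 sidelobe region are illustrative sizes, not values taken from the slides.

```python
import numpy as np

def psr(corr_plane, mask_size=5, region_size=20):
    """Peak-to-Sidelobe Ratio: (peak - sidelobe mean) / sidelobe std."""
    # 1. Locate the correlation peak
    py, px = np.unravel_index(np.argmax(corr_plane), corr_plane.shape)
    peak = corr_plane[py, px]

    # 3. Take a bigger region centered at the peak ...
    r = region_size // 2
    y0, x0 = max(py - r, 0), max(px - r, 0)
    region = corr_plane[y0:py + r + 1, x0:px + r + 1].copy()

    # 2. ... and mask out a small pixel region around the peak
    cy, cx = py - y0, px - x0
    m = mask_size // 2
    keep = np.ones_like(region, dtype=bool)
    keep[max(cy - m, 0):cy + m + 1, max(cx - m, 0):cx + m + 1] = False

    sidelobes = region[keep]
    return (peak - sidelobes.mean()) / sidelobes.std()
```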

12


CMU PIE Database

13 cameras, 21 flashes

Ref: T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” Proc. of the 5th IEEE Intl. Conf. on Automatic Face and Gesture Recognition, May 2002.

13


PIE Database: one face under 21 illuminations, 65 subjects

14


Train on 3, 7, 16 --> Test on 10.

Match Quality = 40.95

15


Using the same filter as before, Match Quality = 30.60

Occlusion of Eyes

16


Match Quality = 22.38

Uncentered Images

17


Impostor: using someone else’s filter. PSR = 4.77

18


Distortion-tolerant Correlation Filter Design

Each of the N training images (d×d) is transformed by the FFT and reordered into a frequency-domain column vector x_i (d²×1); these vectors are collected into the d²×N matrix X = [x_1, x_2, ..., x_N]. The filter is h (d²×1 as a vector, H of size d×d as an array), and the value of the correlation output at the center (origin) for training image i is

c_i(0,0) = h^+ x_i = x_i^+ h

19


Minimum Average Correlation Energy Filter

Minimizing average correlation energy can be done directly in the frequency domain by averaging the correlation plane energies Ei as follows.

E_i = Σ_p |c_i(p)|² = (1/d²) Σ_k |H(k)|² |X_i(k)|² = h^+ X_i* X_i h = h^+ D_i h

where X_i is the d²×d² diagonal matrix containing the Fourier transform values X_i(1), X_i(2), X_i(3), ..., X_i(d²) of training image i along its diagonal, and D_i = X_i* X_i. Averaging over the N training images,

E_ave = (1/N) Σ_{i=1..N} E_i = h^+ [ (1/N) Σ_{i=1..N} X_i* X_i ] h = h^+ D h

Minimizing the average correlation energy h^+ D h subject to the constraints X^+ h = c leads to the MACE filter solution:

h = D^-1 X (X^+ D^-1 X)^-1 c

Ref: A. Mahalanobis, B.V.K. Vijaya Kumar, and D. Casasent, “Minimum average correlation energy filters,” Appl. Opt. 26, pp. 3633-3640, 1987.
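A minimal NumPy sketch of the MACE solution h = D^-1 X (X^+ D^-1 X)^-1 c. The unit constraint vector, the small regularization term, and the normalization of the correlation output are illustrative assumptions, not details from the slides.

```python
import numpy as np

def mace_filter(train_images, c=None, eps=1e-6):
    """MACE filter h = D^-1 X (X^+ D^-1 X)^-1 c in the frequency domain."""
    N = len(train_images)
    # X: d^2 x N matrix of vectorized 2D FFTs of the training images
    X = np.stack([np.fft.fft2(img).ravel() for img in train_images], axis=1)
    # D: average power spectrum (diagonal matrix, stored as a vector) plus
    # a small eps to keep the inverse well behaved
    D = np.mean(np.abs(X) ** 2, axis=1) + eps
    if c is None:
        c = np.ones(N)              # unit correlation value for each training image
    Dinv_X = X / D[:, None]         # D^-1 X
    h = Dinv_X @ np.linalg.solve(X.conj().T @ Dinv_X, c)
    return h.reshape(train_images[0].shape)   # frequency-domain filter H(k)

def correlate(test_image, H):
    """Correlation output of a test image with a frequency-domain filter H."""
    return np.real(np.fft.ifft2(np.fft.fft2(test_image) * np.conj(H)))
```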

20


2D Impulse Response of MACE Filter

22


Features of Correlation Filters

Shift-invariant; no need for centering the test image
Graceful degradation
Can handle multiple appearances of the reference image in the test image
Closed-form solutions based on well-defined metrics

Ref: B.V.K. Vijaya Kumar, A. Mahalanobis and Richard D. Juday, Correlation Pattern Recognition, Cambridge University Press, UK, November 2005.

23


PCA in Frequency Domain

Is there any advantage to carrying out PCA on the Fourier transforms of training images? Since the FT is a unitary transformation, there are apparently no differences (except for sign changes) in the eigenfaces obtained via the space domain or the frequency domain.

24


Phase retains more image information

27


Eigenphases

PCA carried out on the phase-only versions of training image Fourier transforms

[Figure panels: frequency domain vs. space domain]
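A minimal sketch of the eigenphase computation, assuming a stack of aligned training face images; the SVD-based PCA and the function names are illustrative.

```python
import numpy as np

def eigenphases(train_images, num_components=10):
    """PCA on the phase-only Fourier transforms of the training images."""
    # Phase-only spectra: keep exp(j*phase), discard the magnitude
    phase_only = [np.exp(1j * np.angle(np.fft.fft2(img))) for img in train_images]
    A = np.stack([p.ravel() for p in phase_only], axis=1)   # d^2 x N
    A = A - A.mean(axis=1, keepdims=True)
    # Left singular vectors of the centered data are the eigenphases
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :num_components]

def project(image, basis):
    """Feature vector: projection of a phase-only spectrum onto the eigenphases."""
    p = np.exp(1j * np.angle(np.fft.fft2(image))).ravel()
    return basis.conj().T @ p
```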

28


Experiments

Experiment : Training illumination set
1 : 3, 7, 16
2 : 1, 10, 16
3 : 2, 7, 16
4 : 4, 7, 13
5 : 1, 2, 7, 16
6 : 3, 10, 16
7 : 3, 16, 20
8 : 5-10, 18, 19, 20
9 : 5-12
10 : 5-10
11 : 5, 7, 9, 10
12 : 7, 10, 19
13 : 6, 7, 8
14 : 8, 9, 10
15 : 18, 19, 20

29


Test Images

Training images used were full images.

30


Recognition Rates (Full Test Images)

Ref: M. Savvides, and B.V.K. Vijaya Kumar, “Eigenphases Vs. Eigenfaces,” Intl. Conf. on Pattern Recognition (ICPR), Cambridge, England, 2004.

31


Recognition of Left-half Blocked Test Images

32


Recognition of Test Images with Eye Region

33


Face Recognition Grand Challenge (FRGC)

Ver 2.0: 625 subjects; 50,000 recordings; 70 Gbytes

To facilitate the advancement of face recognition research, the FRGC has been organized by NIST.

* P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the Face Recognition Grand Challenge," In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2005

34


FRGC Dataset: Experiment 4

Generic Training Set of 222 people with a total of 12,776 images: used for feature extraction and feature-space generation.

Gallery Set of 466 people (16,028 images total): projected into the reduced-dimensional feature space to obtain a reduced-dimensionality feature representation of the gallery set.

Probe Set of 466 people (8,014 images total): projected into the same reduced-dimensional feature space to obtain a reduced-dimensionality feature representation of the probe set.

Similarity matching is then performed between the two representations.

35


FRGC “Gallery” Images

Controlled (Indoor)

16,028 gallery images of 466 people

36


FRGC “Probe” Images

Uncontrolled (Indoor)

37


FRGC “Probe” Images

Uncontrolled (Outdoor)

Outdoor illumination images are very challenging due to harsh cast shadows.

38


FRGC Baseline Results

The verification rate of PCA is about 12% at a False Accept Rate of 0.1%.

ROC curve from P. Jonathan Phillips et al (CVPR 2005)

39


Correlation Filters for Face Verification

[Diagram: enrollment builds correlation filters; verification correlates a test image against a filter and outputs a similarity score]

Only 1: low performance
256 million correlations: long time

40


Class-dependence Feature Analysis (CFA)

Motivation
Improve the recognition rate by using the generic training set
Reduce the processing time by extracting features using inner products

Class-dependence Feature Analysis: model a population for recognition using a set of people.

41


CFA

Extract features using correlation filters: redundant class-dependence feature analysis (CFA)

– Train a correlation filter for each subject from the generic training set

– Extract a feature vector for each image by inner product

– Calculate the similarity score

42


Class Dependent Feature Analysis (CFA)

[Diagram: training images grouped into Class 1, Class 2, ..., Class 222; each image x produces a feature value y = h^+ x, here using the class-2 MACE filter h_mace-2]

h_mace = D^-1 X (X^+ D^-1 X)^-1 u,   u = [u_1^T, u_2^T, ..., u_n^T]^T

u_1 = [0 0 0 ... 0]^T,  u_2 = [1 1 1 ... 1]^T,  ...,  u_222 = [0 0 0 ... 0]^T

In this figure, we’re building the MACE filter for class 2

43


CFA - 2

CFA basis vectors: h_mace-1, h_mace-2, h_mace-3, ..., h_mace-222

Each basis vector is a MACE filter h_mace = D^-1 X (X^+ D^-1 X)^-1 u with u = [u_1, u_2, ..., u_N]^T; for h_mace-1, u is all zeros except u_1 = [1 1 ... 1]^T, and for h_mace-222, u is all zeros except u_222 = [1 1 ... 1]^T.

Test input with no trained filters.

Linear CFA: y_k = x_k^T [h_mace-1 h_mace-2 ... h_mace-222] = x_k^T H = [c_1, c_2, ..., c_222]

Nonlinear CFA using kernels: y_k = [K(h_mace-1, x_k), K(h_mace-2, x_k), ..., K(h_mace-222, x_k)] = [c_1, c_2, ..., c_222]
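A minimal sketch of linear CFA feature extraction. It assumes a mace_filter routine like the earlier sketch, one filter per generic-training-set class, and a simple normalized-correlation similarity; all of these are illustrative choices rather than the exact pipeline used in the talk.

```python
import numpy as np

def build_cfa_bank(classes):
    """One MACE filter per class; the constraint vector u is 1 for that
    class's training images and 0 for everyone else's."""
    all_images = [img for imgs in classes for img in imgs]
    filters, start = [], 0
    for imgs in classes:
        u = np.zeros(len(all_images))
        u[start:start + len(imgs)] = 1.0
        filters.append(mace_filter(all_images, c=u))   # from the earlier sketch
        start += len(imgs)
    return filters

def cfa_features(image, filters):
    """Feature vector: inner product of the test image with each filter
    (the correlation output value at the origin)."""
    x = np.fft.fft2(image).ravel()
    return np.array([np.vdot(f.ravel(), x) for f in filters])   # h^+ x per class

def similarity(f1, f2):
    """Similarity between two feature vectors, e.g. normalized correlation."""
    return np.abs(np.vdot(f1, f2)) / (np.linalg.norm(f1) * np.linalg.norm(f2))
```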

44


KCFA Timeline

[Bar chart: FRGC Experiment 4 verification rates (0 to 1) for PCA, GSLDA, CFA, KCFA-v1, KCFA-v3, and KCFA-v5]

82.4% @ 0.1% FAR (latest performance)

57


Iris Pattern

Detail from biological tissue:

Muscle ligaments

Connective tissue of stromal layer

Anterior folds, furrows, crypts

Random, detailed arrangement of tissue

58


Iris Segmentation

[Figure: eye image → bounded iris region → “unwrapped” iris pattern]

Standard iris segmentation: commonly used, proposed by Daugman

1 J.G. Daugman, “High Confidence Visual Recognition of Persons by a Test of Statistical Independence,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-61, Nov. 1993.


59


Iris Pattern “Unwrapping”
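A minimal sketch of the polar unwrapping step, assuming circular pupil and iris boundaries with a known center and radii; the parameter names and the nearest-neighbor sampling are illustrative.

```python
import numpy as np

def unwrap_iris(eye_image, center, pupil_radius, iris_radius,
                num_radii=64, num_angles=256):
    """Remap the iris annulus to a rectangular (radius x angle) pattern."""
    cy, cx = center
    radii = np.linspace(pupil_radius, iris_radius, num_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False)
    # Sample along rays from the pupil boundary out to the iris boundary
    ys = cy + np.outer(radii, np.sin(angles))
    xs = cx + np.outer(radii, np.cos(angles))
    ys = np.clip(np.round(ys).astype(int), 0, eye_image.shape[0] - 1)
    xs = np.clip(np.round(xs).astype(int), 0, eye_image.shape[1] - 1)
    return eye_image[ys, xs]   # num_radii x num_angles "unwrapped" pattern
```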

60


Iris Recognition: Correlation Filters

We use correlation filters for iris recognition. We design a filter for each iris class using a set of training images.

[Diagram: determining an iris match with a correlation filter -- segmented iris pattern → FFT → multiply by correlation filter → FFT⁻¹ → correlation output → match / no match]

64


Iris Pattern Deformation

Video Example

Landmark points for all images within one class

Clear deformation from:

Tissue changes AND/OR

Deviations in iris boundaries.

65


Eyelid Occlusion

Example: Eyelid artifacts in segmented pattern.

upper eyelid

lower eyelid

lower eyelid

Example: match comparison

For a significant portion of the area, similarity is lost.

66


Approach

Problem summary: accurate pattern matching when patterns experience relative nonlinear deformations and partial occlusions, in addition to blurring and observation noise.

[Diagram: pattern sample and pattern template → generate evidence → probabilistic model (deformation & occlusion states) → estimate states → match score]

67


Probabilistic Method

Model: a set of observed variables O (similarity cues, eyelid cues) and a set of hidden variables H (relative transformation states).

Given: a pair of iris patterns to compare.

Estimate: the conditional distribution p(H | O) ∝ p(O | H) p(H), where the prior probability p(H) rewards reasonable transformations and the observation likelihood p(O | H) is the likelihood of the observed data, given the transformation.

68


Hidden Variables: Deformation

Iris plane partitioned into 2D field:

Deformation described by vector field:

69


Hidden Variables: Occlusion

Occlusion described by binary field:

Hidden vars:

Field resolution tradeoff: a finer field gives greater representative power but requires more computation.

70


Deformation Evidence

Evidence about local similarities between template and image:

Correlation filter output

Produces similarity cues across local alignments.

Robust to pattern corruption by noise.
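A minimal sketch of producing local similarity cues over a grid of template blocks; here plain normalized block correlation stands in for the correlation filter output, the template and test are assumed to be same-size unwrapped patterns, and the block size and search range are illustrative values.

```python
import numpy as np

def local_similarity_cues(template, test, block=16, search=4):
    """For each template block, correlation scores over nearby local alignments."""
    H, W = template.shape
    cues = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            patch = template[by:by + block, bx:bx + block]
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)
            scores = np.full((2 * search + 1, 2 * search + 1), -np.inf)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= H and 0 <= x and x + block <= W:
                        win = test[y:y + block, x:x + block]
                        win = (win - win.mean()) / (win.std() + 1e-8)
                        scores[dy + search, dx + search] = np.mean(patch * win)
            cues[(by, bx)] = scores   # similarity over candidate local shifts
    return cues
```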

71


Eyelid Evidence

Classifier: Fisher Linear Discriminant in statistical feature space

[Plot: example classifier output -- score distributions for non-occluded vs. occluded regions]

Marginal separation, but enough for useful cues. This yields local occlusion cues.

72


Probabilistic Model Dependencies

Local observed variables: Local hidden variables:

73


Matching Process

Goal: infer the posterior distribution on hidden states.

[Diagram: template and new pattern → similarity evidence and eyelid evidence]

Inference technique: loopy belief propagation

74


After Inference: Match Score Computation

Deterministic match score: average similarity across unoccluded iris regions.

Better: probabilistic match score, the expected score with respect to the estimated posterior.

Advantage: accounts for uncertainty in the posterior.
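A minimal sketch of the two match scores, assuming per-region similarity values and a per-region posterior probability of occlusion coming out of the inference step; treating the expected score as an occlusion-probability-weighted average is one simple reading of the slide, not necessarily the exact formulation used.

```python
import numpy as np

def deterministic_score(similarity, occluded_mask):
    """Average similarity across regions labelled unoccluded."""
    visible = ~occluded_mask
    return similarity[visible].mean()

def probabilistic_score(similarity, p_occluded):
    """Expected score: weight each region by its posterior probability of
    being visible, so uncertain regions count for less."""
    w = 1.0 - p_occluded
    return np.sum(w * similarity) / np.sum(w)
```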

75


Example: Match Comparisons

Template vs. authentic: match score 0.28
Template vs. impostor: match score 0.01

77


78


ICE Phase I: Performance

Verification rate at FAR = 0.1%: Experiment 1: 99.63%; Experiment 2: 99.04%

[Plot: Experiment 1 score distribution -- histograms of match and non-match scores]

84


86


NIST 24 Database

Digital video of live-scan fingerprint data
Optical sensor DFR-90 from Identicator Technology, 500 dpi resolution
Chosen data set: plastic distortion set

The finger is rolled and twisted
10 fingers of 10 people (5 female and 5 male)
10 secs of MPEG2 movie per finger

300 images of size 448x478 pixels (padded to 512x512 pixels)
Chosen subset: 10 thumb prints

Class 3 Class 10

87


Evaluation Protocol

Training: uniformly sampled images from the 300 images of a class
Test: correlate the filters of each class against the 300 images of all classes

Images of the same class as the filter: 300 authentics per filter
Images of a different class from the filter: 2700 impostors per filter
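A minimal sketch of this evaluation loop, assuming one designed filter per class and a PSR-style match score (for example the psr sketch shown earlier); the names and data structures are illustrative.

```python
import numpy as np

def evaluate(filters, images_by_class, match_score):
    """Correlate each class's filter against all 300 images of every class."""
    authentic, impostor = [], []
    for filter_class, H in filters.items():
        for image_class, images in images_by_class.items():
            for img in images:
                corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H)))
                score = match_score(corr)          # e.g. the PSR sketch above
                if image_class == filter_class:
                    authentic.append(score)        # 300 authentics per filter
                else:
                    impostor.append(score)         # 2700 impostors per filter
    return np.array(authentic), np.array(impostor)
```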

88


Resolution Effect

[Figure panels: 512x512, 256x256, 128x128, 64x64, and 32x32]

K. Venkataramani and B.V.K. Vijaya Kumar, “Fingerprint verification using correlation filters,” Audio- and Video-based Biometric Person Authentication (AVBPA), UK, 2003.

0.6%

89


Palmprints

Palmprints have a conglomerate of features. These include principal lines, smaller creases or wrinkles, fingerprint-like ridges, and textures. Palmprints can be easily aligned about fiducial points of the hand’s geometry or shape.

90


Palmprint Verification

[Diagram: design process -- training images → correlation filter; verification -- FFT → correlation filter → IFFT → authentic / imposter]

91


Experiment Specifications

PolyU Palmprint Database
100 palms (classes)
Left hands flipped to look like right hands
3 images per class for training
3 images per class for testing
5 different experiments using region sizes with sides of 64, 80, 96, 112, and 128 pixels

[Figure: left hand, left hand flipped, right hand -- dataset used]

92


Palmprint Verification Results

104


Summary

Correlation filters:

Achieved excellent performance in the Face Recognition Grand Challenge (FRGC)
Performed very well in the Iris Challenge Evaluation (ICE)
Also successful in fingerprint recognition and palmprint recognition

Correlation filters provide a single matching engine for a variety of image biometrics --- making multi-biometric approaches feasible.
