Face Recognition
Computer Vision, September 2003
© 2003 by Davi Geiger


Page 1: Face Recognition

[Figure: a face image is fed to the face-recognition system, which outputs the recognized person.]

Page 2: Face Recognition

[Figure: a query image is fed to the face-recognition system, which outputs the recognized person.]

Face Recognition

• Definition:

– Given a database of labeled facial images: recognize an individual from an image formed under new and varying conditions (pose, expression, lighting, etc.)

• Sub-Problems:

– Representation:

• How do we represent images of faces?

• What information do we store?

– Classification:

• How do we compare stored information to a new sample?

– Search

Page 3: Representation

Representation

Shape Representation:

• Generalized cylinders, Superquadrics …

Appearance-Based Representation

• Refers to the recognition of 3D objects from ordinary images.

• PCA – Eigenfaces, EigenImages

• Fisher Linear Discriminant

Page 4: Image Representation

Image Representation

• An image I ∈ IR^{k×l}, with pixel intensities in [0, 255], is flattened into a column vector i ∈ IR^{kl×1}; an image is therefore a point in kl-dimensional space IR^{kl}.

[Figure: images plotted as points in pixel space, with axes pixel 1, pixel 2, …, pixel kl and intensities ranging from 0 to 255.]

i = [i_1, i_2, …, i_{kl}]^T
  = i_1 [1, 0, …, 0]^T + i_2 [0, 1, …, 0]^T + … + i_{kl} [0, …, 0, 1]^T
  = B c

where B (here the kl × kl identity matrix) is the basis matrix and c = [i_1, i_2, …, i_{kl}]^T is the vector of coefficients.
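A minimal sketch of this flattening step and the trivial identity-basis decomposition i = B c (not from the original slides; NumPy is assumed and the image size is illustrative):

```python
import numpy as np

# A k-by-l image with intensities in [0, 255].
k, l = 4, 3
I = np.random.randint(0, 256, size=(k, l))

# Flatten the image into a point i in IR^(kl).
i = I.reshape(k * l).astype(float)

# Trivial representation: identity basis matrix B and coefficient vector c = i.
B = np.eye(k * l)          # "basis matrix"
c = i.copy()               # "vector of coefficients"

# The image is exactly reconstructed as i = B c.
assert np.allclose(i, B @ c)
```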

Page 5: Toy Example: Dimensionality Reduction

Toy Example: Dimensionality Reduction

• Consider a set of images of 1 person under a fixed viewpoint and N lighting conditions. Each image is made up of 3 pixels, and pixel 1 has the same value as pixel 3 for all images.

[Figure: the N images plotted as points in 3-D pixel space (axes pixel 1, pixel 2, pixel 3); because pixel 1 equals pixel 3, the points lie on a 2-D plane.]

i_{n,1} = i_{n,3}   s.t.   1 ≤ n ≤ N

i_n = [i_{n,1}, i_{n,2}, i_{n,3}]^T
    = i_{n,1} [1, 0, 0]^T + i_{n,2} [0, 1, 0]^T + i_{n,3} [0, 0, 1]^T
    = i_{n,1} [1, 0, 1]^T + i_{n,2} [0, 1, 0]^T
    = B c_n,   where   B = [1 0; 0 1; 1 0]   (basis matrix),   c_n = [i_{n,1}, i_{n,2}]^T.

Stacking the images as columns gives the data matrix D and the coefficient matrix C:

D = [i_1  i_2  …  i_N] = B [c_1  c_2  …  c_N] = B C,   so   C = B⁻¹ D   (B⁻¹ the left inverse of B).

For a new image i_new = [i_{new,1}, i_{new,2}, i_{new,3}]^T, the coefficients are

c_new = B⁻¹ i_new = [0.5  0  0.5;  0  1  0] i_new.
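The toy decomposition can be checked numerically. A sketch under the slide's setup (NumPy assumed; the coefficient values are made up, and B⁻¹ is taken as the left pseudo-inverse of B):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# N images of 3 pixels each, with pixel 1 equal to pixel 3 in every image.
C = rng.uniform(0, 255, size=(2, N))        # coefficient matrix, one column per image
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])                  # basis matrix from the slide
D = B @ C                                   # data matrix, one 3-pixel image per column

# Recover the coefficients with the left (pseudo-)inverse of B.
B_pinv = np.linalg.pinv(B)                  # equals [[0.5, 0, 0.5], [0, 1, 0]]
assert np.allclose(B_pinv, [[0.5, 0, 0.5], [0, 1, 0]])
assert np.allclose(B_pinv @ D, C)

# Coefficients of a new image that satisfies the same pixel-1 = pixel-3 constraint.
i_new = np.array([10.0, 40.0, 10.0])
c_new = B_pinv @ i_new                      # 2-D representation of the 3-pixel image
print(c_new)                                # [10. 40.]
```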

Page 6: Idea: Eigenimages and PCA

Idea: Eigenimages and PCA

[Figure: face images plotted as points in pixel space (axes pixel 1, pixel 2, …, pixel kl; intensities 0–255).]

• Eigenimages are eigenvectors of the image ensemble.

• The principle behind Principal Component Analysis (1) (also called the "Hotelling Transform" (2) or the "Karhunen-Loève Method" (3)): find an orthogonal coordinate system such that the correlation between different axes is minimized.

• Eigenvectors are typically computed using the

Singular Value Decomposition (SVD)

(1) I. T. Jolliffe, Principal Component Analysis, 1986.
(2) R. C. Gonzalez, P. A. Wintz, Digital Image Processing, 1987.
(3) K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, 1946; M. M. Loève, Probability Theory, 1955.

Page 7: PCA: Dimensionality Reduction

PCA: Dimensionality Reduction

• Consider the same set of images: i_n = [i_{n,1}, i_{n,2}, i_{n,3}]^T with i_{n,1} = i_{n,3}, 1 ≤ n ≤ N.

• PCA chooses axes in the directions of highest variability of the data (maximum scatter).

[Figure: the images in 3-D pixel space (axes pixel 1, pixel 2, pixel 3) with the 1st and 2nd principal axes drawn through the data.]

• Each image is now represented by a vector of coefficients in a reduced dimensionality space.

c_n = B^T i_n

[i_1  i_2  …  i_N] = B [c_1  c_2  …  c_N]        (data matrix D)

U S V^T = svd(D);   set B = U,   such that B^T B = Identity and B^T S_T B is diagonal.

• B minimizes an energy function E: among orthonormal bases of a given size, the PCA basis minimizes the reconstruction error Σ_n ‖ i_n − B B^T i_n ‖².

Page 8: The Covariance Matrix

The Covariance Matrix

• Define the covariance (scatter) matrix of the input samples:

(where μ is the sample mean)

S_T = Cov(D) = Σ_{n=1}^{N} (i_n − μ)(i_n − μ)^T
    = [i_1 − μ   i_2 − μ   …   i_N − μ] [i_1 − μ   i_2 − μ   …   i_N − μ]^T
    = (D − μ)(D − μ)^T

(in the matrix form, μ is subtracted from every column of D)
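A sketch verifying that the sum-of-outer-products form and the matrix form of the scatter matrix agree (NumPy assumed; the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 100                       # pixels per image, number of images
D = rng.uniform(0, 255, size=(d, N))

mu = D.mean(axis=1, keepdims=True)  # sample mean (d x 1)

# Sum-of-outer-products form: sum_n (i_n - mu)(i_n - mu)^T
S_sum = np.zeros((d, d))
for n in range(N):
    diff = D[:, [n]] - mu
    S_sum += diff @ diff.T

# Matrix form: (D - mu)(D - mu)^T
S_mat = (D - mu) @ (D - mu).T

assert np.allclose(S_sum, S_mat)    # the two definitions coincide
```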

Page 9: PCA: Some Properties of the Covariance/Scatter Matrix

PCA: Some Properties of the Covariance/Scatter Matrix

• The matrix S_T is symmetric.

• The diagonal contains the variance of each parameter (i.e. element S_T,ii is the variance in the i-th direction).

• Each element S_T,ij is the covariance between the two directions i and j and represents their level of correlation (i.e. a value of zero indicates that the two dimensions are uncorrelated).

Page 10: PCA: Goal Revisited

PCA: Goal Revisited

• Look for: B

• Such that:

– [c_1 … c_N] = B^T [i_1 … i_N]

and the correlation is minimized, OR Cov(C) is diagonal.

Note that Cov(C) can be expressed via Cov(D) and B as:

C C^T = B^T (D)(D)^T B

Page 11: Selecting the Optimal B

Selecting the Optimal B

How do we find such a B?

(D D^T) b_i = λ_i b_i,   i.e.   S_T B = B Λ

B_opt contains the eigenvectors of the covariance of D:

B_opt = [b_1 | … | b_d]

Page 12: SVD of a Matrix

SVD of a Matrix

D = U Σ V^T        (SVD of D)

D D^T = U Σ² U^T        (so the scatter matrix S_T = D D^T has eigenvectors U)

Set B = U.
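A sketch of this step: take the SVD of the (mean-centered) data matrix, set B = U, and check that the columns of U are eigenvectors of the scatter matrix D Dᵀ (NumPy assumed; mean-centering follows the covariance-matrix slide and is a choice made here for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 5, 40
D = rng.uniform(0, 255, size=(d, N))
D = D - D.mean(axis=1, keepdims=True)   # mean-center the columns

# SVD of the data matrix: D = U S V^T.
U, S, Vt = np.linalg.svd(D, full_matrices=False)
B = U                                    # set B = U

# D D^T = U S^2 U^T, so the columns of U are eigenvectors of the scatter matrix.
S_T = D @ D.T
eigvals = S ** 2
for i in range(d):
    assert np.allclose(S_T @ B[:, i], eigvals[i] * B[:, i])

# B is orthonormal: B^T B = Identity.
assert np.allclose(B.T @ B, np.eye(d))
```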

Page 13: Data Reduction: Theory

Data Reduction: Theory

• Each eigenvalue represents the total variance in its dimension.

• Throwing away the least significant eigenvectors in B_opt means throwing away the least significant variance information!
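A sketch of the truncation step: keep only the leading eigenvectors whose eigenvalues account for most of the total variance (NumPy assumed; the 95% threshold and the helper name truncate_basis are illustrative, not from the slides):

```python
import numpy as np

def truncate_basis(D, var_fraction=0.95):
    """Return the leading eigenvectors of the scatter of D that explain
    at least `var_fraction` of the total variance."""
    D = D - D.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(D, full_matrices=False)
    variances = S ** 2                          # eigenvalues of D D^T
    explained = np.cumsum(variances) / variances.sum()
    d = int(np.searchsorted(explained, var_fraction)) + 1
    return U[:, :d]                             # reduced basis B_opt

# Example: data that varies mostly along one direction.
rng = np.random.default_rng(3)
D = np.outer([1.0, 0.0, 1.0], rng.normal(size=50)) + 0.01 * rng.normal(size=(3, 50))
B_opt = truncate_basis(D)
print(B_opt.shape)                              # (3, 1): one axis captures ~all variance
```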

Page 14: PCA for Recognition: Eigenfaces

PCA for Recognition: Eigenfaces

• Consider the same set of images: i_n = [i_{n,1}, i_{n,2}, i_{n,3}]^T with i_{n,1} = i_{n,3}, 1 ≤ n ≤ N.

• PCA chooses axes in the directions of highest variability of the data.

• Given a new image, i_new, compute the vector of coefficients associated with the new basis:

c_new = B⁻¹ i_new = B^T i_new

[Figure: the images in 3-D pixel space (axes pixel 1, pixel 2, pixel 3) with the 1st and 2nd principal axes; the new image i_new is plotted among them.]

• Next, compare c_new, the reduced-dimensionality representation of i_new, against all stored coefficient vectors c_n, 1 ≤ n ≤ N.

• One possible classifier: the nearest-neighbor classifier.

© 2002 by M. Alex O. Vasilescu
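A sketch of the recognition procedure with a nearest-neighbor classifier in coefficient space (not from the original slides; NumPy assumed, and the gallery data, labels, and helper names are illustrative):

```python
import numpy as np

def project(B, images):
    """Coefficients c = B^T i for each column image i (B has orthonormal columns)."""
    return B.T @ images

def recognize(B, gallery_coeffs, gallery_labels, i_new):
    """Nearest-neighbor classification of a new image in eigenface space."""
    c_new = B.T @ i_new
    dists = np.linalg.norm(gallery_coeffs - c_new[:, None], axis=0)
    return gallery_labels[int(np.argmin(dists))]

# Illustrative usage with a tiny synthetic gallery.
rng = np.random.default_rng(4)
D = rng.uniform(0, 255, size=(6, 8))            # 8 "face images" of 6 pixels each
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])     # two people, 4 images each
mean = D.mean(axis=1, keepdims=True)
U, _, _ = np.linalg.svd(D - mean, full_matrices=False)
B = U[:, :3]                                    # keep a 3-D eigenface basis

coeffs = project(B, D - mean)
query = D[:, [2]].ravel() - mean.ravel()        # re-use a gallery image as the query
print(recognize(B, coeffs, labels, query))      # 0
```

In practice the query would be a genuinely new image of one of the enrolled people; re-using a gallery image here only makes the expected output obvious.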

Page 15: Data and Eigenfaces

Data and Eigenfaces

• Each image below is a column vector in the basis matrix B

• Data is composed of 28 faces photographed under the same lighting and viewing conditions.

© 2002 by M. Alex O. Vasilescu

Page 16: PCA for Recognition: EigenImages

PCA for Recognition: EigenImages

• Consider a set of images of 2 people under a fixed viewpoint and N lighting conditions.

• Each image is made up of 2 pixels


• Reduce dimensionality by throwing away the axis along which the data varies the least

• The coefficient vector associated with the 1st basis vector is used for classification.

• Possible classifier: Mahalanobis distance

• Each image is represented by one coefficient vector

• Each person is displayed in N images and therefore has N coefficient vectors

[Figure: two panels showing the images of person 1 and person 2 as point clusters in pixel space (axes pixel 1, pixel 2), with the 1st and 2nd principal axes drawn through the combined data.]

© 2002 by M. Alex O. Vasilescu
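One way to realize the suggested Mahalanobis-distance classifier on the per-person coefficient vectors; a sketch with synthetic clusters (NumPy assumed; the helper names and data are illustrative):

```python
import numpy as np

def mahalanobis(c, mean, cov_inv):
    """Mahalanobis distance of coefficient vector c from a person's cluster."""
    diff = c - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

def classify(c_new, class_coeffs):
    """Assign c_new to the class whose coefficient cluster is closest
    in Mahalanobis distance. class_coeffs maps label -> (dim x N) array."""
    best_label, best_dist = None, np.inf
    for label, C in class_coeffs.items():
        mean = C.mean(axis=1)
        cov = np.cov(C)                      # covariance of the N coefficient vectors
        dist = mahalanobis(c_new, mean, np.linalg.inv(cov))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two people, N coefficient vectors each, in a 2-D coefficient space.
rng = np.random.default_rng(5)
person1 = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.2], size=(20, 2)).T
person2 = rng.normal(loc=[4.0, 0.0], scale=[1.0, 0.2], size=(20, 2)).T
print(classify(np.array([3.5, 0.1]), {1: person1, 2: person2}))   # 2
```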

Page 17: PIE Database (Weizmann)

PIE Database (Weizmann)

Page 18: EigenImages: Basis Vectors

EigenImages: Basis Vectors

• Each image below is a column vector in the basis matrix B.

• PCA encodes the variability across images without distinguishing between variability in people, viewpoints, and illumination.

© 2002 by M. Alex O. Vasilescu

Page 19: Fisher's Linear Discriminant

Fisher’s Linear Discriminant

• Objective: Find a projection which separates data clusters

[Figure: two 1-D projections of the same two-class data, one giving good separation of the clusters and the other giving poor separation.]

Page 20: Fisher Linear Discriminant

Fisher Linear Discriminant

• The basis matrix B is chosen to maximize the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples.

• B is the set of generalized eigenvectors of S_btw and S_win corresponding to a set of decreasing eigenvalues.

B_opt = argmax_B  |B^T S_btw B| / |B^T S_win B|

S_btw b_i = λ_i S_win b_i

© 2002 by M. Alex O. Vasilescu
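A sketch of computing B by solving the generalized eigenproblem S_btw b = λ S_win b (SciPy's generalized symmetric eigensolver scipy.linalg.eigh is assumed; the two-class data and the helper name fld_basis are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def fld_basis(classes):
    """Fisher directions: generalized eigenvectors of (S_btw, S_win),
    ordered by decreasing eigenvalue. `classes` is a list of (dim x N_c) arrays."""
    dim = classes[0].shape[0]
    mu = np.hstack(classes).mean(axis=1, keepdims=True)
    S_win = np.zeros((dim, dim))
    S_btw = np.zeros((dim, dim))
    for Dc in classes:
        mu_c = Dc.mean(axis=1, keepdims=True)
        S_win += (Dc - mu_c) @ (Dc - mu_c).T
        S_btw += Dc.shape[1] * (mu_c - mu) @ (mu_c - mu).T
    # eigh solves S_btw b = lambda * S_win b, returning eigenvalues in ascending order.
    eigvals, eigvecs = eigh(S_btw, S_win)
    return eigvecs[:, ::-1]                 # columns sorted by decreasing eigenvalue

# Two classes separated along the first coordinate.
rng = np.random.default_rng(6)
c1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2)).T
c2 = rng.normal(loc=[5.0, 0.0], scale=1.0, size=(50, 2)).T
B = fld_basis([c1, c2])
print(B[:, 0])   # dominant direction, roughly proportional to [1, 0]
```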

Page 21: FLD: Data Scatter

FLD: Data Scatter

• Within-class scatter matrix

• Between-class scatter matrix

• Total scatter matrix

S_W = Σ_{c=1}^{C} Σ_{i_n ∈ D_c} (i_n − μ_c)(i_n − μ_c)^T

S_B = Σ_{c=1}^{C} |D_c| (μ_c − μ)(μ_c − μ)^T

S_T = S_W + S_B

(D_c is the set of images of class c, μ_c its mean, μ the overall mean, and |D_c| the number of images in class c)
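The three scatter matrices and the identity S_T = S_W + S_B can be checked numerically. A sketch with synthetic two-class data (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
classes = [rng.normal(loc=m, scale=1.0, size=(30, 3)).T for m in ([0, 0, 0], [4, 1, 0])]

D_all = np.hstack(classes)
mu = D_all.mean(axis=1, keepdims=True)            # overall mean

S_W = np.zeros((3, 3))                            # within-class scatter
S_B = np.zeros((3, 3))                            # between-class scatter
for Dc in classes:
    mu_c = Dc.mean(axis=1, keepdims=True)
    S_W += (Dc - mu_c) @ (Dc - mu_c).T
    S_B += Dc.shape[1] * (mu_c - mu) @ (mu_c - mu).T

S_T = (D_all - mu) @ (D_all - mu).T               # total scatter
assert np.allclose(S_T, S_W + S_B)                # S_T = S_W + S_B
```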

Page 22: Fisher Linear Discriminant

Fisher Linear Discriminant

• Consider a set of images of 2 people under a fixed viewpoint and N lighting conditions.

[Figure: the images of person 1 and person 2 as point clusters in pixel space (axes pixel 1, pixel 2), with the 1st and 2nd discriminant axes drawn through the data.]

• Each image is represented by one coefficient vector

• Each person is displayed in N images and therefore has N coefficient vectors

© 2002 by M. Alex O. Vasilescu