
Faces Algorithm Demo

VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo

User's guide

Copyright © 2003-2010 Neurotechnology. All rights reserved.


User's guide version: 3.1.0.0

Publish date: 9/21/2010



Table of Contents

1 Introduction 1

1.1 System Requirements 1

2 Face Image Constraints 2

3 Liveness detection 3

4 Matching Threshold and Score 4

5 Application 5

5.1 Windows 5

5.1.1 Main Window 5

5.1.2 Options Dialog 6

5.1.3 Menu Commands 9

5.1.4 Simple Usage Examples 9

5.2 Linux 10

5.3 Mac OS X 10

Index a



1 Introduction

The VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo application is designed to demonstrate the capabilities of the Neurotechnology face recognition engine. The program is a Windows 2000/XP/Vista/7 compatible GUI application.

The evaluation software supports image acquisition from an external video source (such as a web camera) via the DirectX library. It can also read face images from .bmp, .tif, .png, .jpg and .gif files.

The application has 3 operation modes:

1. Enrollment. The software processes the face image, extracts its features and writes them to the database.

2. Face enrollment features generalization. This mode generates a generalized face features collection from a number of face templates of the same person. Each face image is processed and its features are extracted; the collections of features are then analyzed and combined into one generalized features collection, which is written to the database. Face recognition quality increases if faces are enrolled using this mode.

3. Matching. This mode matches a new face image against the face templates stored in the database.
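The data flow of mode 2 can be pictured with a toy sketch. The engine's actual generalization algorithm is internal to VeriLook/MegaMatcher and is not documented here; in this hypothetical illustration a "template" is reduced to a plain feature vector, and several templates of the same person are combined by simple averaging, purely to show the shape of the operation (several per-image feature collections in, one generalized collection out).

```c
#include <stddef.h>

#define FEATURE_COUNT 4  /* toy template size, not the engine's real layout */

/* Combine several feature collections of one person into a single
 * generalized collection by averaging each feature (illustrative only;
 * the real engine's combination method is proprietary). */
static void generalize_templates(const double templates[][FEATURE_COUNT],
                                 size_t template_count,
                                 double generalized[FEATURE_COUNT])
{
    for (size_t f = 0; f < FEATURE_COUNT; f++) {
        double sum = 0.0;
        for (size_t t = 0; t < template_count; t++)
            sum += templates[t][f];
        generalized[f] = sum / (double)template_count;
    }
}
```

For example, averaging the two toy templates {1, 2, 3, 4} and {3, 2, 1, 0} yields the generalized collection {2, 2, 2, 2}.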

1.1 System Requirements

Windows requirements:

• 128 MB of RAM, 1 GHz CPU, 9 MB of HDD space for the installation package.

• Microsoft Windows 2000/XP/Vista/7.

• DirectX 8.1 or later. You can download DirectX upgrade from Microsoft web site.

• Microsoft GDI+. This library is supplied with Windows XP and the Windows .NET Server family. If you are using an earlier Windows platform (Windows 98/Me or Windows NT 4.0/2000), you should download and install it from the Microsoft web site.

• The Microsoft® XML Parser (MSXML) 3.0 SP4 is required; if it is not already in the system, you should download and install it from the Microsoft web site.

• (Optionally) Video capture device (web camera).

Linux requirements:

• PC with x86 compatible CPU

• Linux OS 2.6 or newer

• GCC-4.0.x or newer

• pkg-config-0.21.0 or newer

• GNU Make 3.81 or newer

• GTK+ 2.10.x or newer libraries and development packages

• libtiff-3.8.x or newer libraries and development packages

• libusb-0.1.8 or newer libraries.

Mac OS X:



2 Face Image Constraints

Face recognition accuracy depends heavily on face image quality, so maximum care should be taken to acquire good quality facial images at the enrollment step. During enrollment, one or more (if available) images are converted into a facial features template, which is usually saved into a database and can then be used to rapidly identify or verify the person.

The following basic recommendations and requirements should be taken into consideration:

Face pose and expressions

1. Use near-frontal face images for enrollment with a rotation deviation of up to 10 degrees in any direction (yaw, pitch, roll). VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo supports roll angle deviation up to 180 degrees.

2. Use more than one image for enrollment to cover slightly different face views in the yaw and pitch directions. With successful coverage, faces deviating up to 25 degrees from frontal can be recognized.

3. Neutral face expression is preferred in the enrollment step. During identification slight changes in facial expression do not influence face recognition accuracy.

4. Persons wearing glasses should use several images - with and without glasses - for enrollment. Thus, during identification they will be recognized in both cases.

5. Glasses with heavy frames and sunglasses will decrease face recognition accuracy.

6. Changes in face appearance because of facial hair should be accommodated in the same manner as wearing of glasses.

Lighting

1. If lighting conditions can be controlled:

1. Lighting should be equally distributed on each side of the face and from top to bottom with no significant shadows within the face region. This can be achieved by using direct frontal light or diffuse lighting.

2. Avoid sunlight or any other additional lighting.

3. Avoid lighting which can produce glares on glasses or even face skin.

Cameras

1. Use cameras of similar quality for both enrollment and identification.

2. Some cameras can be configured to produce mirrored images, i.e. the left eye of the person's face can become the "right" eye in the face image. Make sure that face orientation is uniform across all images used in face recognition, because a facial features template extracted from mirrored face images of a person will not match templates extracted from non-mirrored images of the same person.
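One way to bring mirrored and non-mirrored captures to a uniform orientation is to flip the offending images before template extraction; the demo exposes this internally as its "Flip image horizontally" option (see the Misc options). The helper below is a hypothetical stand-alone sketch of that operation for an 8-bit grayscale pixel buffer; it is not part of the demo application's API.

```c
#include <stddef.h>

/* Mirror an 8-bit grayscale image in place so that mirrored and
 * non-mirrored captures share one orientation before template extraction.
 * Hypothetical helper, assuming a row-major width*height pixel buffer. */
static void flip_horizontal(unsigned char *pixels, size_t width, size_t height)
{
    for (size_t y = 0; y < height; y++) {
        unsigned char *row = pixels + y * width;
        for (size_t x = 0; x < width / 2; x++) {
            unsigned char tmp = row[x];
            row[x] = row[width - 1 - x];
            row[width - 1 - x] = tmp;
        }
    }
}
```

For a 3x2 toy image {1, 2, 3, 4, 5, 6}, flipping yields {3, 2, 1, 6, 5, 4}: each row is reversed independently.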



3 Liveness detection

The Faces algorithm is capable of differentiating live faces from non-live faces (e.g. photos). The "Use liveness check" checkbox and the "Liveness threshold" parameter in the options dialog control the behavior of the liveness check. When the "Use liveness check" checkbox is marked, the liveness check is performed during matching: the liveness score of the collected stream is calculated and checked against the liveness score threshold set by the "Liveness threshold" parameter.

Using the liveness check requires a stream of consecutive images (this check is intended to be used mainly with a video stream from a camera). The stream must be at least 10 frames long, and the recommended length is 10 - 25 frames. Only one person's face should be visible in the stream. If the stream does not qualify as "live", an "Extraction failed" message is displayed in the log window.

To maximize the liveness score of a face found in an image stream, the user should move his head around a bit, tilting it and moving closer to or further from the camera while slightly changing his facial expression. For example, the user can start with his head panned as far left as possible while still detectable by the face detector, then pan it slowly to the right, slightly changing his facial expression, until the head is panned as far right as possible (but still detectable by the face detector).
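The gating rules described above can be summarized in a small sketch. The actual liveness scoring is internal to the Faces algorithm; this hypothetical helper models only the documented rules: at least 10 frames are required (10 - 25 recommended), a threshold of 0 accepts every face as live, and otherwise the stream's score must reach the configured "Liveness threshold".

```c
#define MIN_LIVENESS_FRAMES 10  /* documented minimum stream length */

/* Returns 1 if the stream passes the liveness check, 0 otherwise
 * (a failing stream is reported as "Extraction failed" by the demo).
 * Illustrative sketch; the real scoring happens inside the engine. */
static int liveness_check_passes(int frame_count, double stream_score,
                                 double liveness_threshold)
{
    if (frame_count < MIN_LIVENESS_FRAMES)
        return 0;                  /* stream too short to qualify */
    if (liveness_threshold <= 0.0)
        return 1;                  /* threshold 0: all faces count as live */
    return stream_score >= liveness_threshold ? 1 : 0;
}
```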



4 Matching Threshold and Score

The matching algorithm of the VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo provides a features collection similarity score as its result. The higher the score, the higher the probability that the features collections were obtained from the same person.

The matching threshold is linked to the false acceptance rate (FAR, different subjects mistakenly accepted as the same) of the matching algorithm. The higher the threshold, the lower the FAR and the higher the FRR (false rejection rate, same subjects erroneously accepted as different), and vice versa.

You can convert matching threshold to FAR (false acceptance rate) and vice versa using this table:

FAR (false acceptance rate)    Matching threshold (score)
1 %                            24
0.1 %                          36
0.01 %                         48
0.001 %                        60
0.0001 %                       72
0.00001 %                      84
0.000001 %                     96

or using one of these functions:

    const double ThFARLogRatio = 12;

    NDouble MatchingThresholdToFAR(NInt th)
    {
        if (th < 0) th = 0;
        return pow(10.0, -th / ThFARLogRatio + 2);
    }

    NInt FARToMatchingThreshold(NDouble f)
    {
        if (f > 100.0) f = 100.0;
        else if (f <= 0.0) f = 1E-100;
        return Round((-log10(f) + 2) * ThFARLogRatio);
    }



5 Application

The VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo incorporates the Neurotechnology face recognition algorithm. Using this demo application, face images can be enrolled from still images (image files) or video streams (cameras), and identification tasks can be performed.

5.1 Windows

The VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo application in Windows OS can be started from FacesAlgorithmDemo.exe.

5.1.1 Main Window

The main application window has a three-pane layout: the top pane displays the image or video stream, and the two bottom panes are used for message logging. The application is controlled through menu commands and two toolbar buttons that act as shortcuts for the most used commands.

A person can be enrolled using the "Enroll" command. After enrolling a person's image from a file, the main window looks like this:

[screenshot: main window after enrollment from file]

The main window panes are these:

1. The top face detection pane, used to display video or still images with the result of the face detection algorithm overlaid on the image.



2. The left pane is the application log, used for system information and application progress messages.

3. The right pane is the matching results pane, listing the id of the database subject most similar to the matched image. Subjects are considered "similar" if their score value exceeds the matching threshold corresponding to the FAR (false acceptance rate) set via the Options dialog (Options->Identification). The matching score is also displayed in this pane.

5.1.2 Options Dialog

The Options dialog window can be accessed through the menu command Tools->Options… It is used to configure face detection, extraction, enrollment and identification parameters.

Face Detection:

• Minimum IOD – minimum distance between eyes.

• Maximum IOD – maximum distance between eyes.

• Face confidence threshold – value which controls the requirements for face detection. The greater this value, the stricter the rules applied when looking for faces in an image.

• Max roll angle deviation – defines the maximum roll angle, in degrees, of a face image that can be enrolled or identified.

Face extraction:



• Face quality threshold – controls how strict the rules are when determining whether a found face is of sufficient quality for extraction.

Enrollment:

• Template size – size of face image templates. Large, Medium or Small templates can be used. It is recommended to use the Large template size.

• Frame count – maximum number of frames to process with the face detection algorithm while enrolling a subject using a video camera.

• Max records per template – maximum number of extracted face records to be saved within one face records template.

Identification:



• Template size – default size of face image templates when performing identification. Large, Medium or Small templates can be used. It is recommended to use the Medium template size.

• Frame count – maximum number of frames to process with the face detection algorithm while identifying a subject using a video camera. When the liveness check is used, this value must be at least 10 (the recommended range is 10 - 25).

• FAR – threshold that separates identical and different subjects. The matching threshold is linked to the false acceptance rate (FAR, different subjects erroneously accepted as the same) of the matching algorithm. The higher the threshold, the lower the FAR and the higher the FRR (false rejection rate, same subjects erroneously accepted as different), and vice versa.

• Speed – defines identification speed. When the most accurate identification results are required, it is recommended to use Low speed, although the identification task is then performed more slowly. If maximum identification speed is required, use High speed. Note: template size also affects identification speed; the highest identification speed is achieved when the Small template size and High speed are used.

• Use liveness check – controls whether the liveness check (see page 3) is used while matching.

• Liveness threshold – controls the requirements for live faces. The greater this value, the stricter the rules applied to check whether a face in an image stream is live. (If this value is set to 0, all faces are considered live.)

Misc:



• Flip image horizontally – mirror the image received from the video camera horizontally.

5.1.3 Menu Commands

The following table explains menu commands of the Faces Algorithm Demo application.

Menu command Description

Source » ”Camera name” Choose selected camera as video source.

Source » File Select an image file as a source.

Jobs » Enroll Enroll image to face database.

Jobs » Identify Search for matching image in face database.

Tools » Clear logs Clear application log windows.

Tools » Empty database Empty face database.

Tools » Options… Display options dialog.

Help » About Display information about Faces algorithm demo application.

5.1.4 Simple Usage Examples

In this section, simple basic scenarios of using the VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo application are described in a step-by-step fashion.

Enrolling from camera

1. First, the camera to be used as the capture device must be selected from the "Source" menu in the toolbar. After that the camera video input should be visible in the upper pane of the program.

2. Faces found in the video stream are outlined in the captured image by a green rectangle (the green rectangle outlines the face that best fits the matching requirements of the Neurotechnology Faces algorithm; in addition, this face has its eyes marked by the program).

3. To enroll a face from the video stream, the "Enroll" button in the toolbar can be used, or the "Enroll" option can be selected from the "Jobs" menu. For this operation to succeed, at least one face must be present in the image. The program will process a few frames, enroll the face from those frames into the demo program database, and show a dialog asking for the id of the person being enrolled.

Matching from camera

1. First, the camera to be used as the capture device must be selected from the "Source" menu in the toolbar. After that the camera video input should be visible in the upper pane of the program.

2. Faces found in the video stream are outlined in the captured image by colored rectangles: the green rectangle outlines the face that best fits the matching requirements of the Neurotechnology Faces algorithm (this face also has its eyes marked by the program), and yellow rectangles show other faces found in the image.

3. To identify a face, the "Identify" button must be clicked or the "Identify" option selected from the "Jobs" menu. After this, the face that best suits the matching requirements of the Neurotechnology Faces algorithm (outlined by a green rectangle in the video input pane) will be matched against the demo program database, and the most probable candidate will be displayed in the bottom right pane of the program window.

Enrolling from file

1. First, file input must be selected as the capture source from the "Source" menu in the toolbar.

2. To enroll a face from a file, the "Enroll" button must be pressed or the "Enroll" option selected from the "Jobs" menu. A file open dialog will appear, in which the file to be opened must be selected. The image in the file will be displayed in the upper pane of the window, with the found face outlined by a green rectangle (if no rectangle appears, no face suitable for enrollment was found in the image), and a dialog asking for the id of the person being enrolled will be shown. The outlined face will be enrolled into the demo program database.

Matching from file

1. First, file input must be selected as the capture source from the "Source" menu in the toolbar.

2. To identify a face from a file, the "Identify" button must be pressed or the "Identify" option selected from the "Jobs" menu. A file open dialog will appear, in which the file to be opened must be selected. The image in the file will be displayed in the upper pane, with the found face outlined by a green rectangle (if no rectangle appears, no face suitable for matching was found in the image). The outlined face will be matched against the demo program database, and the most probable candidate will be displayed in the bottom right pane of the window.

5.2 Linux

The VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo application on Linux can be started from the FacesAlgorithmDemo file. There are separate distributions for 32-bit and 64-bit operating systems. See the Windows section (see page 5) to find out how to use this application.

5.3 Mac OS X

The VeriLook 4.0/MegaMatcher 3.1 Algorithm Demo application on Mac OS X can be started from the FacesAlgorithmDemo.app file. See the Windows section (see page 5) to find out how to use this application.



Index

Application 5

Face Image Constraints 2

Introduction 1

Linux 10

Liveness detection 3

Mac OS X 10

Main Window 5

Matching Threshold and Score 4

Menu Commands 9

Options Dialog 6

Simple Usage Examples 9

System Requirements 1

Windows 5
