Source: shodhganga.inflibnet.ac.in/bitstream/10603/42855/10/10_chapter 3.pdf
CHAPTER – 3
STUDY OF EXISTING UNIMODAL BIOMETRIC SYSTEMS AND DESIGNING
AND DEVELOPMENT OF VARIOUS UNIMODAL BIOMETRIC SYSTEMS
3.1 Unimodal Biometric Techniques
Biometric techniques can be divided into two categories based on the number of traits considered to decide the identity of a person. Biometric techniques that use a single trait for identification or verification of a person are known as unimodal biometric techniques. Biometric techniques that use multiple algorithms, multiple traits, multiple sensors, or multiple samples are known as multibiometric techniques.
Biometric systems are also categorized in two categories based on the type of trait used for person identification. Physiological biometric systems judge a person based on physical characteristics of the human being, while behavioral biometric systems judge a person based on behavioral characteristics. Examples of physiological biometric systems are:
1. Fingerprint recognition
2. Face recognition
3. Iris recognition
4. Retina scanning
5. Hand geometry
6. Palmprint recognition
Examples of behavioral biometric systems are:
1. Voice recognition
2. Gait recognition
3. Keystroke recognition
In this research, we concentrate on fingerprint and face recognition techniques, so in this chapter we review only physiological biometric systems. Fingerprint and face recognition systems were studied in detail in chapter 2, so here we review other unimodal biometric systems such as iris recognition, retinal scanning, and hand geometry. Other physiological biometric systems are not widely implemented, so we restrict the overview to the three technologies mentioned above.
3.1.1 Iris recognition
Iris recognition is one of the most accurate biometric systems available today. Owing to this accuracy, iris recognition has been successfully implemented in ATMs and kiosks for banking and travel applications.

An iris recognition system consists of front-end acquisition hardware along with local or central processing software. Compared to facial recognition, iris recognition requires specialized devices that provide infrared illumination.
The software components of an iris system (the image processing and matching software and the template database) can reside on a local PC attached to the device, or on a central system. In a large-scale system, a central server does the work of matching templates and storing the database, while the local system acquires the sample and generates the template. The central server and the local PC exchange the iris template rather than the image itself.

Based on the results of matching, physical or logical access to the resources is granted.
3.1.1.1 History of iris recognition [1]
1936: US ophthalmologist Frank Burch suggests the idea of recognizing people
from their iris patterns long before technology for doing so is feasible.
1981: American ophthalmologists Leonard Flom and Aran Safir discuss the idea
of using iris recognition as a form of biometric security, though technology is still
not yet advanced enough.
1987: Leonard Flom and Aran Safir gain US patent #4,641,349 for the basic
concept of an iris recognition system.
1994: US-born mathematician John Daugman (currently a professor of computer
science at Cambridge University, England) works with Flom and Safir to develop
the algorithms (mathematical processes) that can turn photographs of irises into
unique numeric codes. He is granted US patent #5,291,560 for a "biometric
personal identification system based on iris analysis" the same year. Daugman is
widely credited as the inventor of practical iris recognition since his algorithm is
used in most iris-scanning systems.
1996: Lancaster County Prison, Pennsylvania begins testing iris recognition as a
way of checking prisoner identities.
1999: Bank United Corporation of Houston, Texas converts supermarket ATMs
to iris-recognition technology.
2000: Charlotte/Douglas International Airport in North Carolina and Flughafen
Frankfurt Airport in Germany become two of the first airports to use iris scanning
in routine passenger checks.
2006: Iris-scanning systems are installed at British airports, including Heathrow,
Gatwick, Birmingham, and Stansted. Privacy concerns notwithstanding, hundreds
of thousands of travelers voluntarily opt to use the machines to avoid lengthy
passport-checking queues.
3.1.1.2 Uniqueness of iris recognition
The iris is the colored ring of muscle that opens and shuts the pupil of the eye like a
camera shutter. The colored pattern of our irises is determined genetically when we're in
the womb but not fully formed until we're aged about two. It comes from a pigment
called melanin—more melanin gives you browner eyes and less produces bluer eyes. The
color and pattern of people's eyes is extremely complex and completely unique: the
patterns of one person's two eyes are quite different from each other and even genetically
identical twins have different iris patterns.
Figure 3.1: Iris image
3.1.1.3 Working of iris recognition
To get past an iris-scanning system, the unique pattern of your eye has to be recognized
so you can be positively identified. That means there have to be two distinct stages
involved in iris-scanning: enrollment (the first time you use the system, when it learns to
recognize you) and verification/recognition (where you're checked on subsequent
occasions).
Enrollment
First, all the people the system needs to know about have to have their eyes scanned. This
one-off process is called enrollment. Each person stands in front of a camera and has their
eyes digitally photographed with both ordinary light and invisible infrared (a type of light
used in night vision systems that has a slightly longer wavelength than ordinary red
light). In iris recognition, infrared helps to show up the unique features of darkly colored
eyes that do not stand out clearly in ordinary light. These two digital photographs are then
analyzed by a computer that removes unnecessary details (such as eyelashes) and
identifies around 240 unique features (about five times more "points of comparison" than
fingerprint systems use). These features, unique to every eye, are turned into a simple,
512-digit number called an IrisCode® that is stored, alongside your name and other
details, in a computer database. The enrollment process is completely automatic and
usually takes no more than a couple of minutes. [1]
Figure 3.2: Iris scanners
Verification
Once you're stored in the system, it's a simple matter to check your identity. You simply
stand in front of another iris scanner and have your eye photographed again. The system
quickly processes the image and extracts your IrisCode®, before comparing it against the
hundreds, thousands, or millions stored in its database. If your code matches one of the
stored ones, you're positively identified; if not, tough luck! It either means you're not
known to the system or you're not who you claim to be. [1]
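The comparison step just described is, in Daugman-style systems, a Hamming-distance test between binary codes: two scans of the same eye differ in only a small fraction of bits, while codes from different eyes disagree on roughly half of them. The sketch below illustrates the idea; the 2,048-bit code length and the 0.32 decision threshold follow commonly cited figures for Daugman's design, and the randomly generated codes are purely illustrative stand-ins for real IrisCodes.

```python
import random

CODE_BITS = 2048  # a Daugman-style IrisCode is 2,048 bits (256 bytes)

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two iris codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def identify(probe, database, threshold=0.32):
    """Return the enrolled identity whose code is closest to the probe,
    provided the distance clears the decision threshold."""
    best_id, best_dist = None, 1.0
    for person_id, enrolled in database.items():
        d = hamming_distance(probe, enrolled)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist <= threshold else None

random.seed(42)
alice = [random.randint(0, 1) for _ in range(CODE_BITS)]
bob = [random.randint(0, 1) for _ in range(CODE_BITS)]
db = {"alice": alice, "bob": bob}

# A fresh scan of the same eye differs in a few bits (sensor noise):
# flip about 5% of alice's bits and identify the result.
probe = alice[:]
for i in random.sample(range(CODE_BITS), 100):
    probe[i] ^= 1

print(identify(probe, db))  # "alice": distance ~0.05 is well under 0.32
print(identify([random.randint(0, 1) for _ in range(CODE_BITS)], db))
```

The second query simulates an unenrolled eye: its code sits near 0.5 distance from every stored code, so no match is declared.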
3.1.1.4 Strengths and weaknesses of iris recognition
Strengths of iris recognition:
1. It has the potential for exceptionally high levels of accuracy.
2. It is capable of reliable identification as well as verification.
3. It maintains stability of characteristic over a lifetime.
Weaknesses of iris recognition:
1. Acquisition of the image requires moderate training and attentiveness.
2. It has a propensity for false rejection.
3. A proprietary acquisition device is necessary for deployment.
4. There is some user discomfort with eye-based technology.
3.1.1.5 Applications of iris recognition
The accuracy of the technology appeals to high-security applications such as military and
national infrastructure; and its remote-acquisition capability and ease of use lend
themselves to high-throughput technology and screening applications such as airports and
checkpoints.
The United Arab Emirates uses iris-recognition technology to screen all incoming visitors
against a list of thousands of persons who have been expelled from the UAE. Border
authorities have done 200 billion cross-comparisons between IrisCodes and have caught
46,000 persons illegally attempting to reenter the UAE — with no false matches.
In the United States, the Child Project is an iris-based system for helping to identify and
return missing children; as of September 2007, 1,400 sheriff's offices were participating.
The company that supplies the technology for the Child Project (BI2 Technologies) also
supplies Senior Safety Net and the Inmate Recognition and Identification System (IRIS).
The U.S. is using iris-recognition technologies in Iraq to control access to facilities, but
has so far resisted the temptation to do more than capture facial images on passports.
3.1.2 Retina scanning
Retina-scan technology utilizes the distinctive characteristic of the retina—the surface on
the back of the eye that processes light entering through the pupil— for identification and
verification. Developed in the 1980s, retina-scan is one of the most well-known biometric
technologies, but is also one of the least deployed.
Retina-scan devices are used exclusively for physical access applications and are usually
used in environments requiring exceptionally high degrees of security and accountability
such as high-level government, military, and corrections applications.
Retina-scan and iris-scan are often mistakenly confused with one another or grouped into
a single category referred to as eye biometrics. The two technologies differ substantially: they measure different physiological features, their software and algorithms are very different, their hardware is dissimilar, and the situations in which they can be successfully deployed differ.
3.1.2.1 History of retina scan
Retina biometrics distinguishes individuals by using the patterns of veins occurring in the
back of the eye. A 1935 study by Drs. C. Simon and I. Goldstein first observed the
individually distinguishing characteristics of retinal vascular patterns. Automated
techniques to capture and process retinal patterns for recognition were developed in the
1970s along with the first wave of other early pioneering efforts in digital imaging.
Established in 1976, EyeDentify of Baton Rouge, Louisiana, made retinal scanning
commercially available for access control in the early 1980s [13].
3.1.2.2 Uniqueness of retina scan
The retina's intricate network of blood vessels is a physiological characteristic that
remains stable throughout the life of a person. As with fingerprints and iris patterns,
genetic factors do not determine the exact pattern of blood vessels in the retina. This
allows retina-scan to differentiate between identical twins and provide robust
identification. The retina contains at least as much individual data as a fingerprint, but,
unlike a fingerprint, is an internal organ and is less susceptible to either intentional or
unintentional modification. Certain eye-related medical conditions and diseases, such as
cataracts and glaucoma, can render a person unable to use retina-scan technology, as the
blood vessels can be obscured [2].
3.1.2.3 Working of retina scan
Retina scanning involves image acquisition, identification of distinctive features, and template generation and matching [4].
Image acquisition
Since the retina is small, internal, and difficult to measure without the proprietary
hardware and camera systems specifically designed for retina imaging, image acquisition
is a very difficult process.
Figure 3.3: Retina

Figure 3.4: Retina scanner

In order for the unit to acquire retina images, the user first positions his or her eye very close to the unit's embedded lens, with the eye socket resting on the sight. Beneath the lens, within the device itself, is an imaging component consisting of a small green light
against a white background. The user views this light through the lens; when triggered,
the light moves in a tight circle, measuring the retinal patterns through the pupil. In order
for a retinal image to be acquired, the user must gaze directly into the lens, remaining
perfectly still while focusing on the imaging component. Any movement defeats the
image acquisition process and requires that the imaging process be triggered again. A
small camera captures an image of the retina through the pupil. The acquisition of a
single retina image takes 4 to 5 seconds under ideal conditions. During enrollment,
between three and five acceptable images must be acquired. Since the first one or two
images acquired are almost invariably rejected due to excessive movement, the
enrollment process can be relatively lengthy. Including the time to trigger the process,
respond to system prompts, and acquire sufficient images, enrollments can easily take
over 1 minute. Many users cannot enroll at all, even after several minutes. On the other
hand, it is possible for highly acclimated users to be identified within 2 to 3 seconds—the
identification process is much quicker [4].
Distinctive feature
The retina's intricate network of blood vessels is its distinctive feature. Even identical twins have different patterns of blood vessels [4].
Template generation and matching
Once a device captures a retinal image, the software compiles the unique features of the
network of retinal blood vessels into a template. Retina-scan algorithms require a high-
quality image and will not let a user enroll or verify until the system is able to capture an
image of sufficient quality. The retina template generated by the originator of the
technology is a mere 96 bytes, one of the smallest of any biometric technology.
Retina-scan has robust matching capabilities and is typically configured to do one-to-
many identification against a database of users. However, because quality image
acquisition is so difficult, many attempts are often required to get to the point where a
match can take place. While the algorithms themselves are robust, it can be a difficult
process to provide sufficient data for matching to take place. In many cases, a user may
be falsely rejected because of an inability to provide adequate data to generate a match
template [4].
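The quality gate described above can be sketched as a simple retry loop: each capture attempt yields a quality score, and template generation proceeds only once a score clears the threshold. The scores, the 0.8 threshold, and the zero-filled template in this sketch are illustrative assumptions; only the 96-byte template size comes from the text.

```python
TEMPLATE_BYTES = 96  # template size reported for the original technology

def acquire_template(quality_scores, min_quality=0.8):
    """Step through successive capture attempts until one meets the
    quality gate; raise if the user never provides adequate data
    (a failure-to-acquire, as described above)."""
    for attempt, q in enumerate(quality_scores, start=1):
        if q >= min_quality:
            # A real device would encode the vessel pattern here;
            # a zero-filled template stands in for it.
            return bytes(TEMPLATE_BYTES), attempt
    raise RuntimeError("failure to acquire: inadequate image quality")

# The first one or two attempts typically fail due to movement.
template, attempts = acquire_template([0.42, 0.55, 0.91])
print(len(template), attempts)  # 96 3
```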
3.1.2.4 Strengths and weaknesses of retina scan
Strengths include the following:
1. It is highly accurate.
2. It uses a stable physiological trait.
3. It is very difficult to spoof.
Weaknesses include the following:
1. It is very difficult to use.
2. There is some user discomfort with eye-related technology.
3. It has limited applications.
3.1.2.5 Applications of retina scan
Retinal scan devices are mainly used for physical access applications and are usually
used in environments requiring exceptionally high degrees of security and accountability
such as high-level government, military, and corrections applications. Retinal scanning
has been utilized by several U.S. government agencies including the Federal Bureau of
Investigation (FBI), the Central Intelligence Agency (CIA), and NASA.
Retinal scanning is also used for medical diagnostic applications. Examining the eyes
using retinal scanning can aid in diagnosing chronic health conditions such as congestive
heart failure and atherosclerosis.
3.1.3 Hand geometry
Today, the human hand has another use: as a medium to verify identity. Ancient Egyptians used body measurements to classify and identify people. Today's hand geometry scanners use infrared optics and microprocessor technology to quickly and accurately record and compare hand dimensions [5].
3.1.3.1 History of Hand geometry
Several hand geometry verification technologies have evolved over the past century. They range from electro-mechanical devices to the solid-state electronic scanners being manufactured today.
manufactured today. The U.S. Patent office issued patents to Robert P. Miller in the late
1960‘s and early 1970‘s for a device that measures hand characteristics, and records
unique features for comparison and ID verification. Miller's machines were highly mechanical and manufactured under the name "Identimation." Several other companies launched development and manufacturing efforts during the 1970s and early 1980s. In the mid-1980s, David Sidlauskas developed and patented an electronic hand scanning device and established Recognition Systems, Inc. of Campbell, California in 1986.
The first applications for hand scanners were as access control components. Government
and nuclear facilities used them to protect their facilities [5].
3.1.3.2 Working of hand geometry
Today, hand scanners perform a variety of functions including access control, employee time recording, and point-of-sale applications.
Figure 3.5: Hand geometry scanner
Each human hand is unique. Finger length, width, thickness, curvatures and relative
location of these features distinguish every human being from every other person. The
hand geometry scanner uses a charge coupled device (CCD) camera, infrared light
emitting diodes (LEDs) with mirrors and reflectors to capture black and white images of
the human hand silhouetted against a thirty-two thousand pixel field. The scanner records
no surface details, ignoring fingerprints, lines, scars and color. The process is much like
placing a hand on a beaded projector screen. The hand scanner reads the hand shape by
recording the silhouette of the hand. In combination with a side mirror and reflector, the
optics produces two distinct images, one from the top and one from the side. This method
is known as orthographic scanning [5].
Figure 3.6: Measurement of different parameters
Scanners typically use an optical path of approximately 11 inches between the camera and the platen. Folding the optical path with mirrors reduces the space required to half the original length. Enclosing the optical path in a structure results in the typical hand geometry scanner, which is approximately 8.5 inches square by 10 inches high. The scanner takes ninety-six measurements of the user's hand. A microprocessor and internal software convert the measurements to a nine-byte "template" that is stored for later comparison.
The process of recording a user‘s hand template is known as enrollment. During the
enrollment session, the scanner prompts the enrollee to place his or her hand on the
scanner platen three consecutive times. The platen is the highly reflective surface that
projects the silhouetted hand image. Pins projecting from the platen surface position the
enrollee's fingers to assure accurate image capture. The hand geometry scanner
mathematically averages the three templates and generates an accurate template that the
scanner stores in resident memory. To verify, the user enters a personal identification
number (PIN) in the scanner through the use of a keypad or other data entry device. The
scanner retrieves his or her individual template for comparison. The user places his or her
hand on the scanner. The hand image is captured and a representation is derived using the
same steps as those used for generating the template at the time of enrollment. The
representation thus derived is compared to the stored template. The comparison may
involve, for instance, accumulation of absolute differences in the individual features in
the input representation and the stored template. The comparison typically results in a
single number indicating the strength of the similarity (score) or their difference
(distance). A predetermined threshold determines whether the score/distance is acceptable enough to consider the input representation and the stored template "matched". The match/no-match decision controls the output of the scanner.
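The enrollment and comparison steps above can be sketched as follows: three measurement vectors are averaged into a template, and verification accumulates absolute differences against a distance threshold. The four-feature vectors and the threshold value below are toy numbers for illustration; a real scanner records ninety-six measurements compressed into a nine-byte template.

```python
def average_templates(samples):
    """Enrollment: element-wise average of the three hand-measurement
    vectors captured during the enrollment session."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def distance(template, measurement):
    """Accumulated absolute differences between individual features,
    as in the comparison described above."""
    return sum(abs(t - m) for t, m in zip(template, measurement))

def verify(template, measurement, threshold=5.0):
    """Match/no-match decision against a predetermined threshold."""
    return distance(template, measurement) <= threshold

# Three enrollment placements of the same hand (toy 4-feature vectors).
enroll = [[50.1, 20.3, 18.0, 9.2],
          [50.3, 20.1, 18.2, 9.0],
          [49.9, 20.2, 18.1, 9.1]]
template = average_templates(enroll)

print(verify(template, [50.0, 20.2, 18.1, 9.1]))   # True: same hand
print(verify(template, [44.0, 25.0, 15.0, 12.0]))  # False: different hand
```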
3.1.3.3 Implementation issues of hand geometry
Hand geometry technology faces several implementation issues that must be considered:
1. Card reader emulation
2. Stand alone access control
3. Privacy issues
4. Operation by disabled person
5. Outdoor conditions
3.1.3.4 Strengths and weaknesses of hand geometry
Strengths include the following:
1. Ease of use
2. Resistant to fraud
3. Template size
4. User perception
Weaknesses include the following:
1. Static design
2. Cost
3. Injuries to hands
4. Accuracy
3.1.3.5 Applications of hand geometry
Hand geometry biometric systems have a variety of applications across many domains. The list of areas is given below:
1. Parking lot application
2. Cash vault application
3. Dual custody application
4. Anti-passback
5. Time and attendance
6. Point of sale applications
7. Interactive kiosks
3.1.4 Limitations of unimodal biometric systems
3.1.4.1 Reasons for failure of different unimodal biometric systems
Fingerprint
1. Cold finger
2. Dry/oily finger
3. High or low humidity
4. Angle of placement
5. Pressure of placement
6. Location of finger on platen (poorly placed core)
7. Cuts to fingerprint
8. Manual activity that would mar or affect fingerprints (construction, gardening)
Facial recognition
1. Change in facial hair
2. Change in hairstyle
3. Lighting conditions
4. Adding/removing hat
5. Adding/removing glasses
6. Change in weight
7. Change in facial aspect (angle at which facial image is captured)
8. Too much or too little movement
9. Quality of capture device
10. Change between enrollment and verification cameras (quality and placement)
11. "Loud" clothing that can interfere with face location
Iris-scan
1. Too much movement of head or eye
2. Glasses
3. Colored contacts
Retina-scan
1. Too much movement of head or eye
2. Glasses
Hand geometry
1. Jewelry
2. Change in weight
3. Bandages
4. Swelling of joints
3.1.4.2 Limitations of unimodal biometric systems
While unimodal biometric systems have advantages over password- or token-based approaches, they also face several challenges [6]:
1. Noise in the sensed data
A fingerprint image with a scar, or a voice sample altered by a cold, are examples of
noisy data. Noisy data may also result from defective or improperly maintained
sensors or unfavorable ambient conditions. Noisy biometric data may not be
successfully matched with corresponding templates in the database, resulting in a
genuine user being incorrectly rejected.
2. Intra-class variations
Intra-class variations in biometric systems are typically caused by an individual who
is incorrectly interacting with the sensor (e.g., incorrect facial pose), or due to
changes in the biometric characteristics of a person over a period of time (e.g., change
in hand geometry). These variations can be handled by storing multiple templates for
every user and updating these templates over time. Template update is an essential
ingredient of any biometric system since it accounts for changes in a person's
biometric with the passage of time. The face, hand and voice modalities, in particular,
can benefit from suitably implemented template update mechanisms.
3. Inter-class similarities
Inter-class similarity refers to the overlap of feature spaces corresponding to multiple
classes or individuals. In an identification system comprising a large number of enrolled individuals, the inter-class similarity between individuals will increase the
false match rate of the system. Therefore, there is an upper bound on the number of
individuals that can be effectively discriminated by the biometric system.
4. Non-universality
The biometric system may not be able to acquire meaningful biometric data from a
subset of users. A fingerprint biometric system, for example, may extract incorrect
minutia features from the fingerprints of certain individuals, due to the poor quality of
the ridges. Thus, there is a failure to enroll (FTE) rate associated with using a single
biometric trait.
5. Interoperability issues
Most biometric systems operate under the assumption that the biometric data to be
compared are obtained using the same sensor and, hence, are restricted in their ability
to match or compare biometric data originating from different sensors. For example, a
speaker recognition system may find it challenging to compare voice prints
originating from two different handset technologies such as electret and carbon-
button.
6. Spoof attacks
Spoofing involves the deliberate manipulation of one's biometric traits in order to
avoid recognition or the creation of physical biometric artifacts in order to take on the
identity of another person. This type of attack is especially relevant when behavioral
traits such as signature and voice are used. However, physical traits such as
fingerprints and iris are also susceptible to spoof attacks. Spoof attacks, when
successful, can severely undermine the security afforded by a biometric system.
There are several ways to address issues related to spoofing. In the case of physical
traits, such as fingerprint and iris, a liveness detection scheme may be used to detect
artifacts; in the case of behavioral traits, a challenge-response mechanism may be
employed to detect spoofing.
7. Other vulnerabilities
A biometric system is vulnerable to a broad range of attacks. Ratha et al. (2001) identified several levels of attacks that can be launched against a biometric system: (i)
a fake biometric trait such as an artificial finger may be presented at the sensor, (ii)
illegally intercepted biometric data may be resubmitted to the system, (iii) the feature
extractor may be replaced by a Trojan horse program that produces pre-determined
feature sets, (iv) legitimate feature sets may be replaced with synthetic feature sets,
(v) the matcher may be replaced by a Trojan horse program that always outputs high
scores thereby defying system security, (vi) the templates stored in the database may
be modified or removed, or new templates may be introduced in the database, (vii)
the data in the communication channel between two modules of the system may be
altered, and (viii) the final decision output by the biometric system may be
overridden.
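Among the mitigations mentioned in this section, template update (challenge 2) lends itself to a short sketch: after each successful match, the stored template is nudged toward the fresh sample so that it tracks gradual changes in the trait. The exponential-moving-average rule and the 0.1 learning rate below are an assumed scheme chosen for illustration, not a prescribed algorithm.

```python
def update_template(stored, new_sample, alpha=0.1):
    """Exponential moving-average template update: blend the stored
    template with the fresh sample after every successful match, so
    the template follows slow drift in the user's biometric."""
    return [(1 - alpha) * s + alpha * n for s, n in zip(stored, new_sample)]

template = [10.0, 20.0, 30.0]
# The user's measurements drift slightly over repeated visits.
for visit in range(5):
    sample = [10.5, 20.5, 30.5]
    template = update_template(template, sample)
print([round(t, 2) for t in template])  # [10.2, 20.2, 30.2]
```

After five visits the template has moved roughly 40% of the way toward the new measurements, while a single noisy sample can never displace it abruptly.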
3.2 Creation of database for face and fingerprint recognition system
In the first phase of the research, the researcher prepared two unimodal systems, viz. a face recognition system and a fingerprint recognition system. As part of the development of both unimodal systems, the first step was to create a master database for each. The researcher considered 30 persons for the preparation of the face and fingerprint databases: 25 students and 5 faculty members. The first 25 persons are students of the Department of Computer Science, Saurashtra University, Rajkot; the last 5 are faculty members, including the researcher himself. The directory structure for storing the image database of the 30 persons is shown below:
Sr. No. Name of student / faculty Gender Folder name Index of the person
1 Chandarana Niyati F s1 101
2 Vaishnav Kairavi F s2 102
3 Kalariya Monali F s3 103
4 Dave Malvika F s4 104
5 Kambariya Jigna F s5 105
6 Shekhda Ravina F s6 106
7 Bhagiya Divya F s7 107
8 Rathod Riddhi F s8 108
9 Kalariya Manali F s9 109
10 Gohel Ruchika F s10 110
11 Padiya Seema F s11 111
12 Kavar Dipali F s12 112
13 Maru Arjun M s13 113
14 Chavda Gaurav M s14 114
15 Bhalodiya Pratik M s15 115
16 Dodiya Chirag M s16 116
17 Shah Parth M s17 117
18 Gosai Kaushikgiri M s18 118
19 Makwana Nayan M s19 119
20 Kapadiya Dhaval M s20 120
21 Shah Moin M s21 121
22 Dangar Jaydev M s22 122
23 Chirag Gusani M s23 123
24 Vyas Abhay M s24 124
25 Chavda Ravi M s25 125
26 Divyakant Meva M s26 126
27 Apurva Pandya M s27 127
28 Shital Rakangor F s28 128
29 Dr C K Kumbharana M s29 129
30 Hetal Thaker F s30 130
Table 3.1: Person details of face and fingerprint database with directory name structure
Before describing the model for building the database, the devices used for this purpose are described.
3.2.1 Device used for capturing face samples
The researcher used the inbuilt camera of a Dell Inspiron 1525 system for capturing facial images of different persons. The camera was manufactured by Creative Technologies Ltd. It captures images of size 320x240 pixels. Each image is converted into a grayscale image and then resized to 112x92 pixels. The image is saved in .pgm format, as this occupies less storage space. The device is shown in figure 3.7.
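The post-processing just described (grayscale frame, resize to 112x92, save as .pgm) can be sketched in pure Python. This is an illustrative sketch only, not the actual capture code; the frame below is synthetic and the helper names are made up:

```python
def resize_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbour resize of a row-major buffer of grayscale bytes."""
    out = bytearray()
    for y in range(dst_h):
        sy = y * src_h // dst_h                # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w            # nearest source column
            out.append(pixels[sy * src_w + sx])
    return bytes(out)

def to_pgm_bytes(pixels, width, height):
    """Encode 8-bit grayscale pixels as a binary (P5) PGM image."""
    return b"P5\n%d %d\n255\n" % (width, height) + pixels

# Synthetic 320x240 grayscale frame standing in for a webcam capture
frame = bytes((x + y) % 256 for y in range(240) for x in range(320))
small = resize_nearest(frame, 320, 240, 112, 92)
pgm = to_pgm_bytes(small, 112, 92)
print(len(pgm))
```

In practice the resized bytes would be written to a file such as temp.pgm, as described in section 3.2.4.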
3.2.2 Device used for capturing fingerprint samples
The researcher used a Digital Persona U.are.U 4000 scanner. Its specifications are shown in figure 3.8. The device captures images in .jpg format with a resolution of 328x356 pixels. The device is shown in figure 3.9.
Figure 3.7: Dell webcam used for capturing face samples
Figure 3.8: Digital Persona U.are.U 4000 / 4000S / 4000B specifications
Figure 3.9: Digital Persona U.are.U 4000 used for capturing fingerprint samples
3.2.3 Directory structure for storage of captured face and fingerprint samples
In this research, 30 persons have been considered for database creation; the details are shown in table 3.1. For each person, 10 samples are captured for the face and 10 for the fingerprint. The face samples are taken in various postures and with or without spectacles, and a few samples are captured with facial expressions. The purpose is to test whether the system can identify a person despite variations in spectacles and expressions. In total, 20 images are captured per person, giving 600 sample images in the database.
Total number of persons: 30
Number of faces per person: 10
Number of fingerprints per person: 10
Total number of face and fingerprint images per person: 10 + 10= 20
Total no. of images in face database: 10 * 30 = 300
Total no. of images in fingerprint database: 10 * 30 = 300
Total no. of images in face and fingerprint database: 300 + 300 = 600
The directory structure is shown and explained here. There are two directories, viz. Face database and Fingerprint database, each containing 30 subdirectories. The structure of the face database directory is shown in figure 3.10. The naming convention is as follows: each subdirectory name starts with 's' followed by a number from 1 to 30 based on the index of the person; for example, the subdirectory for the first person is named 's1', and the names continue up to 's30' for the thirtieth person. The structure of a subdirectory is shown in figure 3.11. Every subdirectory contains 10 samples. The name of each image is generated as follows: it starts with the index number of the person (e.g. '101'), followed by '_' and the sample number from 1 to 10. An example path in the face recognition database is:

Face recognition \ s1 \ 101_1.pgm

The sample image file structure is shown in figure 3.12. Similarly, an example path in the fingerprint recognition database is:

Fingerprint recognition \ s1 \ 101_1.jpg

The sample image file structure is shown in figures 3.13, 3.14 and 3.15.
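The naming convention above can be expressed as a small helper. The following Python sketch is illustrative only (the function name sample_path is hypothetical):

```python
import os

def sample_path(base, person_k, sample_no, ext):
    """Build '<base>/s<k>/<100+k>_<sample_no><ext>' per the convention above."""
    folder = "s%d" % person_k        # subdirectory 's1' .. 's30'
    index = 100 + person_k           # person index 101 .. 130
    return os.path.join(base, folder, "%d_%d%s" % (index, sample_no, ext))

print(sample_path("Face recognition", 1, 1, ".pgm"))
print(sample_path("Fingerprint recognition", 30, 10, ".jpg"))
```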
Figure 3.10: Base directory structure for face database
Figure 3.11: Sub directory structure for face database directory
Figure 3.12: File structure for 's1' subdirectory in 'Face database' directory
Figure 3.13: Base directory structure for fingerprint database
Figure 3.14: Sub directory structure for fingerprint database directory
Figure 3.15: File structure for 's1' subdirectory in 'Fingerprint database' directory
3.2.4 Model for database creation
The researcher has prepared a GUI with which database preparation becomes easier. The device details are given above. The model for preparing the database is shown below:
Figure 3.16: Model for creation of master databases
The model works in the following manner. Initially, the user runs the IDE for capturing fingerprint and face samples. On pressing the button, the webcam is first initiated and a face sample is captured. After capturing the face sample, demo.exe from the ZKFinger SDK is executed to capture the fingerprint image; both images are then renamed as required by the user and stored in the respective directories, which are created automatically by the system. After creating the base database of face and fingerprint images, the next task is to prepare the master train database of face and fingerprint images. The process of creating the master train database is described in section 3.2.5.
The master database created at the end of the process contains 60 face samples (2 samples per person, 30 x 2) and 30 fingerprint samples (1 sample per person, 30 x 1).
The steps to create the database with the GUI are shown below:
Step-1: Load the IDE for capturing face and fingerprint samples. Click on the button 'Capture face and fingerprint'
Figure 3.17: Load IDE for capturing face and fingerprint samples
Execute capturefaceandfinger.m to launch the IDE for capturing face and fingerprint samples. The IDE contains two axes controls, two edit box controls and three buttons; the axes controls show the face and fingerprint images. Then click on the button 'Capture face and fingerprint'.
Step-2: The Dell webcam captures the face sample and loads it in the IDE
On clicking 'Capture face and fingerprint', the Dell webcam is initiated; it captures the face sample and saves the sample image as 'temp.pgm' in the root folder. The face sample is then loaded in axes control 1. The GUI is shown in figure 3.18.
Step-3: demo.exe from the ZKFinger SDK is loaded to capture the fingerprint
After loading the face sample, demo.exe of the ZKFinger SDK is executed. The GUI of demo.exe is shown in figure 3.19. Click the 'Connect Sensor' button to initiate the fingerprint sensor. The ZKFinger SDK is a toolkit through which various fingerprint recognition applications can be built. Digital Persona's U.are.U 4000 sensor can be connected and used to capture fingerprint samples with demo.exe of the ZKFinger SDK.
Figure 3.18: IDE loaded with face sample captured by Dell webcam
Figure 3.19: Execution of demo.exe of ZKFinger IDE
Step-4: Select image format and ZKFinger 9.0 version
Figure 3.20: Captured fingerprint sample image in demo.exe GUI
After loading the demo.exe GUI and connecting the sensor, select ZKFinger 9.0 and the image format '.jpg'. On placing a finger on the U.are.U 4000 fingerprint sensor, the captured image becomes visible in the demo.exe GUI.
Step-5: Click the 'Save Image' button and close demo.exe
After selecting the appropriate options, click the 'Save Image' button in the demo.exe GUI to save the captured fingerprint sample. The fingerprint sample is saved as 'fingerprint.jpg' in the root folder. The GUI is shown in figure 3.21. Then close the demo.exe GUI. Control passes back to the IDE and the fingerprint sample is loaded in axes control 2 of the IDE.
Step-6: Enter the person index in edit box 1 and the sample index in edit box 2
In the next step, enter the person index in edit box 1 and the sample index in edit box 2. As per the directory structure, the person index starts from 101 and the sample index starts from 1.
Figure 3.21: On exiting demo.exe, fingerprint sample loaded in IDE
Step-7: Click on the 'Rename files' button
Figure 3.22: Click on 'Rename files' button
After entering the person index and sample index, click the 'Rename files' button. On clicking this button, the face.pgm and fingerprint.jpg files in the root folder are renamed with a filename generated by combining the person index and sample index; e.g. with 131 as person index and 1 as sample index, face.pgm becomes '131_1.pgm' and fingerprint.jpg becomes '131_1.jpg'. Once the files are renamed, the sample index automatically changes to 2 in edit box 2. After completing 10 samples for one person, the person index automatically advances to the next index, i.e. from '131' to '132'. Both files are automatically moved to the folder named after the person index. Figures 3.22 and 3.23 show this representation.
Figure 3.23: Sample index changes automatically and files will be renamed
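The renaming and auto-increment behaviour of the 'Rename files' button can be sketched as follows. The helper names below are hypothetical; the actual logic lives in the MATLAB IDE:

```python
def rename_target(person_index, sample_index):
    """Filenames the temporary face/fingerprint captures are renamed to."""
    stem = "%d_%d" % (person_index, sample_index)
    return stem + ".pgm", stem + ".jpg"      # face file, fingerprint file

def next_indices(person_index, sample_index):
    """Advance the sample index; after 10 samples move to the next person."""
    if sample_index < 10:
        return person_index, sample_index + 1
    return person_index + 1, 1

print(rename_target(131, 1))
print(next_indices(131, 10))
```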
Step-8: Click on the 'Clear images' button
After completing the operations of capturing face and fingerprint samples, renaming the files and moving them to the respective folders, click the 'Clear images' button to clear the content of the axes controls. Figure 3.24 shows this operation.
After completing the sample-capturing operations for face and fingerprint for all persons, prepare the directory structure shown in figures 3.10, 3.11, 3.13 and 3.14. Move all face and
fingerprint images into their respective directories and subdirectories. For example, the samples of person index '101' are moved to the 's1' subdirectory of the Face recognition and Fingerprint recognition directories. The final file structure is shown in figures 3.12 and 3.15.
Figure 3.24: Click 'Clear images' button to clear content of axes control
3.2.5 Identify train and test samples for master database creation
Once the preparation of the sample image database is complete, the next step is to identify the sample images for the master database, also called train samples. This step is shown as step-3 of the database creation model in figure 3.16. For identifying the train samples for the face master database, the researcher adopted a novel approach, which is discussed hereafter. For identifying the train samples for the master fingerprint database, the researcher manually selected good-quality fingerprint sample images. The file db.mat is then created by extracting minutiae features from the train fingerprint sample images and creating templates, which are stored in the .mat file with the respective person index number. Each entry in the fingerprint database file is named with the person index (e.g. 101) followed by '_' and a sample number (e.g. '1'),
e.g. 101_1, and so on for each person up to 130_1.
3.2.5.1 Identifying train face samples for master database
The initial database is created from 30 subjects, each enrolled with 10 face samples, giving 300 samples in total. The problem here is how to divide the samples into a training database and a testing database. We planned to keep 2 samples per person in the training database and 8 samples per person in the testing database.
Figure 3.25: PCA based face recognition model
Initially, two sample images of each person are included in the train database; as we have 30 persons, this gives 60 images in the train database. Features are extracted from the images by applying the Eigenface method. The remaining 240 of the 300 images are added to the test database. Then, using the Euclidean distance measure of the Eigenface approach, each test sample is compared with the train database and the train image with the minimum distance is identified, from which the success rate is counted. In this system, we took five rounds of the above flow of process. Initially, sample images 1 and 2 of each person were placed in the train database and the rest of the images
(8 images per person) were placed in the test database. In the second round, images 3 and 4 were placed in the train database and compared with the remaining test samples to identify the success rate. Round three used samples 5 and 6, round four samples 7 and 8, and round five samples 9 and 10 as the train database. The results of this approach to find the images best suited to the train database are shown below.
Results of experiment
Considering a Train Database of 60 sample facial images of 30 persons (2 for each person) and a Test Database of 240 sample facial images of the same 30 persons (8 for each person):
No. of persons = 30
No. of images in Train Database = 60 (Two for each person)
No. of images in Test Database = 240 (Eight for each person)
Case 1 – Image 1, 2 as train database and rest as test database
Case 2 – Image 3, 4 as train database and rest as test database
Case 3 – Image 5, 6 as train database and rest as test database
Case 4 – Image 7, 8 as train database and rest as test database
Case 5 – Image 9, 10 as train database and rest as test database
Case number   Image numbers   Success rate in %   Samples identified successfully (out of 240)
1             1, 2            63.75               153
2             3, 4            57.50               138
3             5, 6            74.58               179
4             7, 8            70.41               169
5             9, 10           62.08               149
Table 3.2: Success rate of five cases
Based on the results shown in table 3.2, the researcher selected samples 5 and 6 as the train dataset and the rest of the samples as the test dataset.
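The selection can be reproduced from the raw counts of Table 3.2. A short Python check (the percentages in the table are the counts out of 240, reported to two decimals):

```python
# Correctly identified samples (out of 240) per training-image pair
counts = {(1, 2): 153, (3, 4): 138, (5, 6): 179, (7, 8): 169, (9, 10): 149}
rates = {imgs: 100.0 * n / 240 for imgs, n in counts.items()}
best = max(rates, key=rates.get)
print(best)  # the pair with the highest success rate
```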
Through the operations shown above, 60 face samples have been identified as train database samples, and 30 fingerprint samples have been identified to build the db.mat master database file by extracting minutiae features.
3.3 Building unimodal face recognition system
After preparing the database of face and fingerprint samples and identifying samples for the train and test databases, the next step is to prepare the unimodal face and fingerprint recognition systems. First, the researcher designed a unimodal face recognition model to identify a person based on facial features, using a PCA-based face recognition method with the Eigenface approach. The original system was developed by Amir Hossein Omidvarnia, who prepared it with reference to [12]. The model of the face recognition system is shown below:
Figure 3.26: Face recognition system model
Figure 3.26 shows the model of the face recognition system. Initially, the multimodal IDE is executed and the face recognition system option is selected. In that GUI, click the button to capture a face sample; the process of capturing a face sample is similar to that described in section 3.2.4. After that, the sample image is compared with the master train database of 60 images (2 images per person x 30 persons). By computing the Euclidean distance between each master database image and the sample image, the image with the minimum Euclidean distance is identified, and the index of that image is returned and displayed in the GUI.
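The matching rule of this model can be sketched in a few lines of Python. This is a pure illustration under assumed names (here 'templates' maps each person index to that person's feature vectors from the master database):

```python
import math

def match(test_vec, templates):
    """Return the person index whose template has the minimum
    Euclidean distance to the test feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min((dist(test_vec, t), idx)
               for idx, vecs in templates.items() for t in vecs)[1]

# Two persons with 2 templates each (as in the master database, which
# holds 2 images per person); the feature values are made up.
templates = {101: [[0.0, 0.0], [0.1, 0.0]],
             102: [[5.0, 5.0], [5.1, 5.0]]}
print(match([0.05, 0.02], templates))
```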
3.3.1 Flow chart representation of face recognition system
The system operations can be described with the flow chart shown in figure 3.27. The unimodal and multimodal IDEs have been developed to execute the operations of face recognition and fingerprint recognition.
Figure 3.27: Flowchart of unimodal face recognition system
(Flowchart 3.27: Start → Execute the Unimodal IDE and select the Face recognition option → Load a test face sample from the test database → Calculate the Euclidean distance between the test sample and each train sample from the master face database → Identify the image with minimum Euclidean distance → Display the person index with minimum Euclidean distance → End process)
For the user's ease of use, the researcher has developed this IDE. The steps are shown in the flow chart, and the GUI execution is shown after the flow chart representation. As per the system design, the user can select a test image from the database. It is also possible to enrol a live sample for face recognition, and to evaluate the performance without the IDE, i.e. by executing the .m file from the command prompt.
Based on the flowchart shown in figure 3.27, the GUI steps for executing the face recognition system are given below.
3.3.2 GUI representation of face recognition system
Step-1: Load multimodal IDE
Figure 3.28: Multimodal IDE with all components
The first step is to load the multimodal IDE. For that, it is required to execute the multimodal_ide.m file. The GUI contains panels with different components.
The panels are: Controls, Test images, Scores and Identity. Components become visible on the IDE based on the selected menu option.
Step-2: Select the 'Unimodal' menu option and, from the Unimodal menu, select the 'Face recognition' option.
The IDE contains two menu options: 'Unimodal' and 'Multimodal'. Since this is the face recognition system execution, the 'Unimodal' option must be selected. On selecting this option, two options, 'Face recognition' and 'Fingerprint recognition', become visible. Figure 3.29 shows these options.
Step-3: Loading the components required for the face recognition system
On selection of the Unimodal menu option and the Face recognition submenu option, only the components required for the face recognition system are visible to the user. These components are: the 'Load face image' button, 'Match face' button and 'Clear image' button, axes control 1 to load the test face image, and an edit control to display the recognized person index.
Figure 3.29: Selection of Unimodal menu option and Face recognition submenu option.
Figure 3.30: Components for face recognition system
Step-4: Selection of a test face image from the test database
Once the components for the face recognition system are loaded, the next step is to press the 'Load face image' button. On pressing this button, a 'Pick an image file' dialog box appears; select a test face image with the .pgm extension from the test database.
Step-5: Display the selected test face image in axes control 1 and press the 'Match face' button
After the test face image is selected from the test database, the system loads the file into axes control 1. Next, press the 'Match face' button to perform the face recognition process. The system executes the PCA-based face recognition method using the Eigenface approach; the working of the method is described in section 3.3.2.1.
Figure 3.31: Selection of test face image
3.3.2.1 PCA based face recognition system with Eigenfaces [8][9][12]
Principal component analysis transforms a set of data obtained from possibly correlated variables into a set of values of uncorrelated variables called principal components. The number of components can be less than or equal to the number of original variables. The first principal component has the highest possible variance, and each succeeding component has the highest possible variance under the restriction that it must be orthogonal to the preceding components. The principal components required here are the eigenvectors of the covariance matrix of the facial images.
(1) First step
In the first step, a training data set is formed. A 2D image of size m x n can be represented as a 1D vector by concatenating its rows; the image is thus transformed into a vector of length N = m*n.
I = [ x11 x12 ... x1n
      x21 x22 ... x2n
       :   :   .   :
      xm1 xm2 ... xmn ]   (m x n)

By concatenating the rows, the image I becomes the column vector

x = [ x11 ... x1n  x21 ... x2n  ...  xm1 ... xmn ]'   (length N = m*n)
A matrix of learning images X is created with M vectors of length N. The matrix is then centred: the vector of mean values is determined and subtracted from each image vector. The centred vectors are arranged to form a new training matrix of size N x M.
(2) Second step
The second step calculates the covariance matrix C and finds its eigenvectors and eigenvalues. The covariance matrix C has dimension N x N, from which N eigenvectors and eigenvalues can be obtained; the rank of the covariance matrix is limited by the number of images in the learning set. The eigenvector associated with the highest eigenvalue reflects the highest variance, and the one associated with the lowest eigenvalue the smallest variance.
The eigenvectors are sorted by eigenvalue so that the first vector corresponds to the highest eigenvalue, and they are then normalized. This creates a new matrix in which each eigenvector is a column vector; the dimension of this matrix is N x D, where D is the desired number of eigenvectors. Each original image can be reconstructed by adding the mean image to a weighted sum of these vectors.
(3) Third step
The third and last step is the recognition of faces. The image of the person to be found in the training database is transformed into a vector P, reduced by the mean value and projected with the matrix of eigenvectors (Eigenfaces).
Classification is done by computing the distance between the person's projected vector and each vector of the matrix Y of projected training images. Euclidean distance is the most common measure, although other measures can also be applied.
If the minimum distance between the test face and the training faces is higher than a threshold value, the person is declared unknown; otherwise the person is identified as the closest match.
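The three steps above can be sketched compactly. The following is a minimal NumPy illustration, not the thesis's MATLAB implementation; the helper names (train_eigenfaces, recognize) are hypothetical, and it uses the well-known M x M surrogate matrix A'A so that the large N x N covariance matrix need not be formed explicitly:

```python
import numpy as np

def train_eigenfaces(T, num_components):
    """T: N x M matrix, one vectorized training image per column."""
    m = T.mean(axis=1, keepdims=True)        # step 1: mean face
    A = T - m                                # centred training matrix (N x M)
    L = A.T @ A                              # M x M surrogate of the covariance
    vals, vecs = np.linalg.eigh(L)           # step 2: eigen-decomposition
    order = np.argsort(vals)[::-1]           # sort by descending eigenvalue
    E = A @ vecs[:, order[:num_components]]  # map back to N-dim eigenfaces
    E /= np.linalg.norm(E, axis=0)           # normalize each eigenface
    P = E.T @ A                              # projected training set (D x M)
    return m, E, P

def recognize(x, m, E, P):
    """Step 3: project a test vector, return (best index, min distance)."""
    w = E.T @ (x.reshape(-1, 1) - m)
    d = np.linalg.norm(P - w, axis=0)
    return int(np.argmin(d)), float(d.min())

# Tiny synthetic demonstration: 4 random 'images' of 50 pixels each
rng = np.random.default_rng(0)
T = rng.normal(size=(50, 4)) * 5.0
m, E, P = train_eigenfaces(T, 3)
idx, d = recognize(T[:, 2] + 0.01 * rng.normal(size=50), m, E, P)
print(idx)
```

A threshold on the returned distance would implement the known/unknown decision described above.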
Step-6: Displaying identity of person in edit control
Figure 3.32: Display identity of person with person index in edit control
As shown in step-5, the system executes the PCA-based face recognition method using the Eigenface approach and identifies the index of the person having the minimum Euclidean distance from the test face sample. After this identification, click the 'Clear all' button to clear the content of the axes control and edit control.
3.3.3 Performance evaluation of face recognition system
For performance measurement of this unimodal biometric technique, the researcher has
taken three cases.
1. By considering average distance of 8 test samples
2. By considering minimum distance of 8 test samples
3. By considering maximum distance of 8 test samples
Each test database sample is compared with the 60 train database samples and the Euclidean distance is calculated. This process is done for all 8 test database samples of each person, and from these the
minimum, maximum and average Euclidean distances are calculated and stored in three different files named min_distance.txt, max_distance.txt and avg_distance.txt. After writing all these scores to the text files, the False Accept Rate (FAR) and False Reject Rate (FRR) are calculated for the test samples. For the FRR, the test is applied to all 8 test samples and the results are compared with the minimum, maximum and average scores stored in the respective text files; the False Reject Rate (FRR) is calculated from these comparisons. The outcomes of this experiment are given below:
Calculating FRR
Consider a database of 60 samples for training and 240 samples for testing. The training set for each person contains 2 samples, and the other 8 samples of the same person are compared under three cases:
Case 1: minimum distance of all 8 test samples from the 2 train samples
Case 2: maximum distance of all 8 test samples from the 2 train samples
Case 3: average distance of all 8 test samples from the 2 train samples
The Matlab code for calculating FRR is shown here:
-----------------------------------------------------------------------
% A sample script which shows the usage of functions included in the
% PCA-based face recognition system (Eigenface method).
% This program calculates the FRR for each person by considering
% images 5 and 6 as train samples and the rest 8 as test samples.
clear all;
clc;
close all;

% You can customize and fix the initial directory paths
TrainDatabase = 'D:\phd_dtm_practical\FaceRecognition_University\Experiment3\Database\TrainDatabase';
TestDatabase  = 'D:\phd_dtm_practical\FaceRecognition_University\Experiment3\Database\TestDatabase';
TestDatabase2 = 'D:\phd_dtm_practical\FaceRecognition_University\Experiment3\TestDatabase';

count = 100;
success_min = 0; success_max = 0; success_avg = 0;
frr_min = 0; frr_max = 0; frr_avg = 0;

% Fetching the stored distance values from the score files
min_distance = [];
max_distance = [];
avg_distance = [];
fid  = fopen('min_distance.txt', 'r+');
fid1 = fopen('max_distance.txt', 'r+');
fid2 = fopen('avg_distance.txt', 'r+');
fid3 = fopen('falserejection.txt', 'wt');

% Generate vector of minimum distances
for i = 1:30
    distance1 = fscanf(fid, '%f');
    min_distance = [min_distance distance1];
end

% Generate vector of maximum distances
for i = 1:30
    distance1 = fscanf(fid1, '%f');
    max_distance = [max_distance distance1];
end

% Generate vector of average distances
for i = 1:30
    distance1 = fscanf(fid2, '%f');
    avg_distance = [avg_distance distance1];
end

% Finding FRR for each person based on minimum distance
for i = 1:30
    TrainDatabasepath = strcat(TrainDatabase, int2str(i));
    T = CreateDatabase(TrainDatabasepath, i);
    [m, A, Eigenfaces] = EigenfaceCore(T);
    TestDatabasepath = strcat(TestDatabase, int2str(i));
    for j = 1:8
        num = count + i;
        str = strcat(int2str(num), '_', int2str(j));
        TestImage = strcat(TestDatabasepath, '\', str, '.pgm');
        im = imread(TestImage);
        min_dist = Recognition(TestImage, m, A, Eigenfaces);
        % fprintf([num2str(min_dist) '\t' num2str(min_distance(i)) '\n']);
        if (min_dist > min_distance(i))
            frr_min = frr_min + 1;
        else
            success_min = success_min + 1;
        end
    end
end
disp(frr_min);
disp(success_min);
fprintf(fid3, '%f \t %f', frr_min, success_min);

% Finding FRR for each person based on maximum distance
for i = 1:30
    TrainDatabasepath = strcat(TrainDatabase, int2str(i));
    T = CreateDatabase(TrainDatabasepath, i);
    [m, A, Eigenfaces] = EigenfaceCore(T);
    TestDatabasepath = strcat(TestDatabase, int2str(i));
    for j = 1:8
        num = count + i;
        str = strcat(int2str(num), '_', int2str(j));
        TestImage = strcat(TestDatabasepath, '\', str, '.pgm');
        im = imread(TestImage);
        min_dist = Recognition(TestImage, m, A, Eigenfaces);
        % fprintf([num2str(min_dist) '\t' num2str(min_distance(i)) '\n']);
        if (min_dist > max_distance(i))
            frr_max = frr_max + 1;
        else
            success_max = success_max + 1;
        end
    end
end
disp(frr_max);
disp(success_max);
fprintf(fid3, '%f \t %f', frr_max, success_max);

% Finding FRR for each person based on average distance
for i = 1:30
    TrainDatabasepath = strcat(TrainDatabase, int2str(i));
    T = CreateDatabase(TrainDatabasepath, i);
    [m, A, Eigenfaces] = EigenfaceCore(T);
    TestDatabasepath = strcat(TestDatabase, int2str(i));
    for j = 1:8
        num = count + i;
        str = strcat(int2str(num), '_', int2str(j));
        TestImage = strcat(TestDatabasepath, '\', str, '.pgm');
        im = imread(TestImage);
        min_dist = Recognition(TestImage, m, A, Eigenfaces);
        % fprintf([num2str(min_dist) '\t' num2str(min_distance(i)) '\n']);
        if (min_dist > avg_distance(i))
            frr_avg = frr_avg + 1;
        else
            success_avg = success_avg + 1;
        end
    end
end
![Page 42: CHAPTER 3 STUDY OF EXISTING UNIMODAL BIOMETRIC …shodhganga.inflibnet.ac.in/bitstream/10603/42855/10/10_chapter 3.pdf · Biometric techniques which are using single traits for identification](https://reader034.vdocuments.us/reader034/viewer/2022043004/5f8522a74de94233f42d87bf/html5/thumbnails/42.jpg)
Page 86
disp(frr_avg); disp(success_avg);
fprintf(fid3,'%f \t %f',frr_avg,success_avg);
fclose(fid); fclose(fid1); fclose(fid2); fclose(fid3); -----------------------------------------------------------------------
Results of calculating FRR
The results of the FRR calculation experiment are shown in table 3.3; a graphical
representation is shown in figure 3.33.
Case       No. of train samples   No. of test samples   Genuine acceptance   False rejection   GAR      FRR
min_dist           60                    240                    26                 214         10.83%   89.16%
max_dist           60                    240                   142                  98         59.16%   40.83%
avg_dist           60                    240                   240                   0         100%     0%

Table 3.3: FRR calculation for three different cases
Figure 3.33: FRR calculation for facial recognition
Calculating FAR
Consider a database of 60 samples for training, i.e. a training set of 2 samples per
person. Each person's model is compared with the 290 samples of the other persons,
taking different cases.
Case 1: minimum distance from all 8 test samples with 2 train samples
Case 2: maximum distance from all 8 test samples with 2 train samples
Case 3: average distance of all 8 test samples with 2 train samples
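Each case relies on a per-person threshold derived from that person's genuine match distances (the values stored in min_distance.txt, max_distance.txt and avg_distance.txt). A minimal sketch of how such thresholds can be derived, written here in Python for brevity with hypothetical distance values:

```python
# Sketch (not the thesis code): deriving the three per-person thresholds
# used by the cases above from one person's genuine Euclidean distances.
def thresholds(genuine_distances):
    """Return (min, max, avg) of a person's genuine match distances."""
    n = len(genuine_distances)
    return (min(genuine_distances),
            max(genuine_distances),
            sum(genuine_distances) / n)

# Hypothetical genuine distances for one enrolled person:
d = [3.0, 5.0, 4.0, 6.0]
t_min, t_max, t_avg = thresholds(d)
print(t_min, t_max, t_avg)   # 3.0 6.0 4.5
```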
The comparison scheme is as follows. There are 10 face images per person, i.e. 300
images for 30 persons. As FAR is to be calculated, each person's facial images are
compared with those of the remaining 29 persons: for example, the facial images of
person 101 are compared with those of persons 102 to 130, giving 290 comparisons for
person 101. The same comparison is carried out for the other 29 persons, so in total
8700 (290 * 30 persons) comparisons are made.
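The counting logic behind both error rates can be sketched as follows (illustrative Python with hypothetical distances, not the thesis code): a genuine test is falsely rejected when its distance exceeds the person's threshold, and an impostor test is falsely accepted when its distance falls below it.

```python
# Sketch: counting false rejections (genuine tests) and false
# acceptances (impostor tests) against one threshold.
def far_frr(genuine, impostor, threshold):
    fr = sum(1 for d in genuine if d > threshold)    # falsely rejected
    fa = sum(1 for d in impostor if d < threshold)   # falsely accepted
    frr = 100.0 * fr / len(genuine)
    far = 100.0 * fa / len(impostor)
    return far, frr

# Hypothetical distances:
genuine  = [2.0, 3.0, 9.0, 2.5]    # same-person comparisons
impostor = [8.0, 7.5, 1.0, 9.5]    # different-person comparisons
far, frr = far_frr(genuine, impostor, threshold=4.0)
print(far, frr)   # 25.0 25.0
```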
The Matlab code for calculating FAR is shown here:
-----------------------------------------------------------------------
% A sample script which shows the usage of functions included in the
% PCA-based face recognition system (Eigenface method).
% This program calculates FAR by comparing two samples of one person
% with 290 samples of the other 29 persons.
clear all; clc; close all;

% You can customize and fix initial directory paths
TrainDatabase = 'D:\phd_dtm_practical\FaceRecognition_University\Experiment 3\Database\TrainDatabase';
TestDatabase  = 'D:\phd_dtm_practical\FaceRecognition_University\Experiment 3\TestDatabase\TestDatabase';

count = 100;
success_min = 0; success_max = 0; success_avg = 0;
far_min = 0; far_max = 0; far_avg = 0;

% Fetching the per-person threshold values from files
min_distance = []; max_distance = []; avg_distance = [];
fid  = fopen('min_distance.txt','r+');
fid1 = fopen('max_distance.txt','r+');
fid2 = fopen('avg_distance.txt','r+');
fid3 = fopen('falseaccept.txt','wt');

% Generate vectors of minimum, maximum and average distances
for i = 1:30, distance1 = fscanf(fid,'%f');  min_distance = [min_distance distance1]; end
for i = 1:30, distance1 = fscanf(fid1,'%f'); max_distance = [max_distance distance1]; end
for i = 1:30, distance1 = fscanf(fid2,'%f'); avg_distance = [avg_distance distance1]; end

% Finding FAR for each person based on minimum distance
for i = 1:30
    TrainDatabasepath = strcat(TrainDatabase,int2str(i));
    T = CreateDatabase(TrainDatabasepath,i);
    [m, A, Eigenfaces] = EigenfaceCore(T);
    for j = 1:30
        TestDatabasepath = strcat(TestDatabase,int2str(j));
        for l = 1:10
            fprintf(['value of i is:' num2str(i) ' value of j is:' num2str(j) ' value of l is:' num2str(l) '\n']);
            if (i == j), continue; end
            num = count + j;
            str = strcat(int2str(num),'_',int2str(l));
            TestImage = strcat(TestDatabasepath,'\',str,'.pgm');
            im = imread(TestImage);
            min_dist = Recognition(TestImage, m, A, Eigenfaces);
            if (min_dist < min_distance(i)), far_min = far_min + 1;
            else success_min = success_min + 1; end
        end
    end
end
disp(far_min); disp(success_min);
fprintf(fid3,'%f \t %f \n',far_min,success_min);

% Finding FAR for each person based on maximum distance
for i = 1:30
    TrainDatabasepath = strcat(TrainDatabase,int2str(i));
    T = CreateDatabase(TrainDatabasepath,i);
    [m, A, Eigenfaces] = EigenfaceCore(T);
    for j = 1:30
        TestDatabasepath = strcat(TestDatabase,int2str(j));
        for l = 1:10
            if (i == j), continue; end
            num = count + j;
            str = strcat(int2str(num),'_',int2str(l));
            TestImage = strcat(TestDatabasepath,'\',str,'.pgm');
            im = imread(TestImage);
            min_dist = Recognition(TestImage, m, A, Eigenfaces);
            if (min_dist < max_distance(i)), far_max = far_max + 1;
            else success_max = success_max + 1; end
        end
    end
end
disp(far_max); disp(success_max);
fprintf(fid3,'%f \t %f \n',far_max,success_max);

% Finding FAR for each person based on average distance
for i = 1:30
    TrainDatabasepath = strcat(TrainDatabase,int2str(i));
    T = CreateDatabase(TrainDatabasepath,i);
    [m, A, Eigenfaces] = EigenfaceCore(T);
    for j = 1:30
        TestDatabasepath = strcat(TestDatabase,int2str(j));
        for l = 1:10
            if (i == j), continue; end
            num = count + j;
            str = strcat(int2str(num),'_',int2str(l));
            TestImage = strcat(TestDatabasepath,'\',str,'.pgm');
            im = imread(TestImage);
            min_dist = Recognition(TestImage, m, A, Eigenfaces);
            if (min_dist < avg_distance(i)), far_avg = far_avg + 1;
            else success_avg = success_avg + 1; end
        end
    end
end
disp(far_avg); disp(success_avg);
fprintf(fid3,'%f \t %f \n',far_avg,success_avg);

fclose(fid); fclose(fid1); fclose(fid2); fclose(fid3);
-----------------------------------------------------------------------
Results of calculating FAR
The results of the FAR calculation experiment are shown in table 3.4; a graphical
representation is shown in figure 3.34.
Case       False acceptance (out of 8700)   Genuine rejection (out of 8700)   FAR      GRR
min_dist                153                              8547                 1.75%    98.25%
max_dist               6727                              1973                 77.32%   22.67%
avg_dist               2981                              5791                 34.26%   66.56%

Table 3.4: FAR calculation for three different cases
Figure 3.34: FAR calculation for facial recognition
The performance of the face recognition system is summarized in tables 3.3 and 3.4. A
detailed discussion and a comparison with the performance of other systems follow in
chapter 6.
3.4 Building unimodal fingerprint recognition system
After preparing the database of face and fingerprint samples and identifying samples
for the train and test databases, the next step is to build the unimodal face and
fingerprint recognition systems. The design of the unimodal face recognition model,
which identifies a person by facial features, is discussed in section 3.3; here the
unimodal fingerprint recognition system is described. The researcher has used a
minutiae-based fingerprint recognition approach. The original system was developed by
Vahid K. Alilou of the Department of Computer Engineering, University of Semnan. The
model of the fingerprint recognition system is shown below in figure 3.35.
Initially, the multimodal IDE is executed and the fingerprint recognition option is
selected. In the GUI, the user clicks the button to capture a fingerprint sample; the
capture process is similar to that described in section 3.2.4. The sample image is then
compared against the master train database (db.mat) of 30 images (1 image per person *
30 persons). By comparing minutiae-point features between the master database images
and the sample image, the image with the maximum score is identified, and its index is
returned and displayed in the GUI.
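The identification step described above, scoring the probe against every enrolled template and returning the index of the maximum score, can be sketched as follows (illustrative Python; the scoring function here is a hypothetical stand-in for the minutiae matcher):

```python
# Sketch of identification against a master database: score the probe
# against each template and return the best-scoring 1-based index.
def identify(probe, templates, match_score):
    """Return (index, score) of the best-matching template."""
    best_index, best_score = 0, float('-inf')
    for i, tpl in enumerate(templates, start=1):
        s = match_score(probe, tpl)
        if s > best_score:
            best_index, best_score = i, s
    return best_index, best_score

# Toy stand-in for minutiae matching: similarity of two numbers.
score = lambda a, b: -abs(a - b)
idx, s = identify(0.42, [0.1, 0.4, 0.9], score)
print(idx)   # 2
```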
Figure 3.35: Fingerprint recognition system model
3.4.1 Flow chart representation of fingerprint recognition system
The system operations are described by the flow chart shown in figure 3.36; the GUI
execution is shown after the flow chart representation. As per the system design, the
user selects a test image from the database. It is also possible to implement a
mechanism for enrolling a live sample for fingerprint recognition. The IDE provides
both face and fingerprint recognition under the same menu option, i.e. 'Unimodal'.
Figure 3.36: Flowchart of unimodal fingerprint recognition system
The flowchart proceeds as follows: Start; execute the Unimodal IDE and select the
fingerprint recognition option; load a test fingerprint sample from the test database;
identify minutiae features from the test sample and compare them with the templates in
the master fingerprint database (db.mat); find the maximum matching score; display the
person index with the maximum matching score; end of process.
Based on the flowchart shown in figure 3.36, the GUI steps for executing the
fingerprint recognition system are given below.
3.4.2 GUI representation of fingerprint recognition system execution
Step-1: Load multimodal IDE
The first step is to load the multimodal IDE by executing the multimodal_ide.m file.
The GUI contains three panels containing different components: Controls, Test images,
Scores and Identity. Components become visible on the IDE based on the selected menu
option. See figure 3.37.
Step-2: Select 'Unimodal' menu option and, from the Unimodal menu, select 'Fingerprint
recognition' option.
The IDE contains two menu options: 'Unimodal' and 'Multimodal'. Since the fingerprint
recognition system is being executed here, the 'Unimodal' option must be selected. On
selecting this option, two options, 'Face recognition' and 'Fingerprint recognition',
become visible. Figure 3.38 shows these options.
Figure 3.37: Multimodal IDE with all components
Figure 3.38: Selection of Unimodal option and Fingerprint recognition submenu option
Step-3: Loading components required for fingerprint recognition system
On selection of the Unimodal menu option and the Fingerprint recognition submenu
option, only the components required for the fingerprint recognition system are
visible to the user: the 'Load fingerprint image', 'Match fingerprint' and 'Clear
image' buttons, axes control 2 to load the test fingerprint image, and an edit control
to display the recognized person index. See figure 3.39.
Step-4: Selection of test fingerprint image from test database
After the components for the fingerprint recognition system are loaded, the next step
is to press the 'Load fingerprint image' button. On pressing this button, the 'Pick an
image file' dialog box appears; select a test fingerprint image with the .jpg
extension from the test database. See figure 3.40 for detail.
Step-5: Display selected test fingerprint image in axes control 2 and press 'Match
fingerprint' button
After a test fingerprint image is selected from the test database, the system loads it
into axes control 2. Next, press the 'Match fingerprint' button to perform the
fingerprint recognition process. See figure 3.41 for the GUI representation. The
system executes the minutiae-based fingerprint recognition approach; the working of
the method is shown in section 3.4.2.1.
Figure 3.39: Components for fingerprint recognition system
Fingerprint matching algorithms largely fall into three classes: correlation-based,
minutiae-based, and non-minutiae-feature-based matching. Correlation-based matching
superimposes two fingerprint images and computes pixel-wise correlation for different
displacements and rotations. Minutiae-based matching uses minutiae extracted from both
fingerprints to perform alignment and to find minutiae pairings between the two
minutiae sets; it can be viewed as a point-pattern matching problem with theoretical
roots in pattern recognition and computer vision. Non-minutiae-feature-based matching
uses features such as ridge shape, orientation and frequency images to perform
alignment and matching. Among these classes, minutiae-based methods are the most
common, owing to their strict analogy with the way forensic experts compare
fingerprints and their legal acceptance as proof of identity in many countries.
Minutiae points are also known to be highly distinctive from finger to finger in terms
of spatial distribution, making them ideal features for fingerprint matching.
Additionally, minutiae sets offer a better trade-off between uniqueness and
practicality than other levels of fingerprint features, such as ridge
orientation/frequency images and skin pores. In this research, the researcher has
therefore adopted the minutiae-based fingerprint recognition method.
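The point-pattern idea behind minutiae-based matching can be sketched as follows (illustrative Python with assumed tolerance values, not the adopted algorithm): after alignment, two minutiae are paired when they are close in both position and orientation, and a similarity score can then be derived from the number of paired minutiae.

```python
import math

# Sketch: count minutiae pairs between two aligned sets. A minutia is
# (x, y, theta); tolerances d_tol (pixels) and theta_tol (radians) are
# assumed values for illustration.
def pair_count(set_a, set_b, d_tol=10.0, theta_tol=math.radians(20)):
    used = set()
    pairs = 0
    for (xa, ya, ta) in set_a:
        for j, (xb, yb, tb) in enumerate(set_b):
            if j in used:
                continue
            close = math.hypot(xa - xb, ya - yb) <= d_tol
            aligned = abs(ta - tb) <= theta_tol
            if close and aligned:
                used.add(j)      # each minutia pairs at most once
                pairs += 1
                break
    return pairs

a = [(10, 10, 0.0), (50, 60, 1.0), (90, 20, 2.0)]
b = [(12, 11, 0.1), (51, 58, 1.1), (30, 30, 0.5)]
print(pair_count(a, b))   # 2
```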
Figure 3.40: Selection of test fingerprint sample image from test database
3.4.2.1 Minutiae based fingerprint recognition system [10][11]
The researcher has adopted the work of Vahid K. Alilou of the University of Semnan,
Iran, who has developed a minutiae-based fingerprint recognition system and made its
source code openly available. With minor changes, the researcher has adopted this
code. The graphical representation of the system steps is given in figure 3.42, and a
simple explanation follows.
The complete processing is done in three stages:
1. Preprocessing
2. Feature extraction
3. Matching
Stage 1 (preprocessing) comprises the following steps:
a. Thinning
b. Binarization
c. Making the segmentation mask
d. Image enhancement
Figure 3.41: Loading test fingerprint image in axes control2
Stage 2 (feature extraction) comprises the following steps:
a. Finding minutiae points
b. Filtering false minutiae points
Stage 3 (matching) comprises the following steps:
a. Loading the database
b. Registration
c. Computing the matching score
Figure 3.42: Fingerprint recognition system steps
The feature extraction and matching mechanism works as follows. The algorithm first
extracts features from the fingerprint and stores them in a vector called 'minutiae',
whose entries have the form [X, Y, CN, Theta, Flag, 1], where X and Y are the
coordinates of a minutia and Theta is its orientation. CN is the crossing number
{0: isolated point, 1: termination point, 2: continuing ridge point, 3: bifurcation
point}, and Flag is 0 for a permissible minutia and 1 for a non-permissible one.
The function 'transform' transforms x, y and theta according to the i-th reference
point. The function 'match' finds the best matching using the minutiae feature details
and computes the overall similarity score.
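The role of 'transform' can be illustrated with a small sketch (Python for brevity; the geometry shown is a plausible reading of the description above, not Alilou's exact code): each (x, y, theta) triple is re-expressed relative to a chosen reference minutia by translating and rotating the whole set.

```python
import math

# Sketch: re-express minutiae relative to a reference minutia so that
# two fingerprints can be compared in a common coordinate frame.
def transform(minutiae, ref):
    """Re-express (x, y, theta) triples relative to reference minutia `ref`."""
    xr, yr, tr = ref
    out = []
    for (x, y, t) in minutiae:
        dx, dy = x - xr, y - yr
        # rotate the translated point by -theta_ref
        xn = dx * math.cos(tr) + dy * math.sin(tr)
        yn = -dx * math.sin(tr) + dy * math.cos(tr)
        out.append((xn, yn, t - tr))
    return out

m = [(5.0, 5.0, 1.0), (8.0, 5.0, 1.5)]
aligned = transform(m, ref=(5.0, 5.0, 1.0))
print(aligned[0])   # (0.0, 0.0, 0.0) -- the reference maps to the origin
```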
Step-6: Displaying identity of person in edit control
Figure 3.43: Display identity of the recognized person with edit control
As shown in step-5, the system executes the minutiae-based fingerprint matching method
to find the best matching score and, from it, the person index. The recognized
person's index is shown in the edit control. After this identification process, click
the 'Clear all' button to clear the contents of the axes control and the edit control.
3.4.3 Performance evaluation of fingerprint recognition system
The researcher evaluated the fingerprint recognition system by taking 9 test samples
of each person, i.e. 270 samples of 30 persons in all. These samples are compared with
the 30 train database samples. This experiment yields the GAR and FRR of the
fingerprint recognition system.
Calculating FRR
Consider a database of 30 samples for training and 270 samples for testing: the
training set contains 1 sample per person, and the other 9 samples of the same person
are compared against it. The Matlab code for calculating FRR is shown here:
-----------------------------------------------------------------------
clear all; clc;
addpath(genpath(pwd));
load('db.mat');
GAR = 0; FRR = 0;        % FRR counter (was left uninitialized)
fid = fopen('fp_far1.txt','wt');
count = 100;
for i = 1:30
    person = count + i;
    for j = 1:9
        filename = ['D:\phd_dtm_practical\FingerprintRecognition_University\TestDatabase\' ...
                    num2str(person) '_' num2str(j) '.jpg'];
        img = imread(filename);
        if ndims(img) == 3, img = rgb2gray(img); end   % color images
        disp(['Extracting features from ' filename ' ...']);
        ffnew = ext_finger(img,1);
        fprintf(['Computing similarity between ' num2str(j) ...
                 ' and database file ' num2str(person) '_1 from SU2014 :']);
        str1 = [num2str(person) '_' num2str(j) '.jpg'];
        score = match(ffnew, ff{i});
        if (score > 0.50), GAR = GAR + 1; else FRR = FRR + 1; end
        fprintf(fid,'%s %f \n',str1,score);
    end
end
display(GAR); display(FRR);
fprintf(fid,'%f \t %f \n',GAR,FRR);
fclose(fid);
-----------------------------------------------------------------------
By applying the above code, the researcher obtained the results shown in table 3.5; a
graphical representation is shown in figure 3.44.
Case   Success (out of 270)   Failure (out of 270)   GAR       FRR
1              257                     13            95.185%   4.815%

Table 3.5: GAR and FRR for fingerprint recognition system
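The percentages in table 3.5 follow directly from the success and failure counts, as this small sketch (Python) confirms:

```python
# Sketch: GAR is the fraction of accepted genuine attempts; FRR is its
# complement.
def rates(success, total):
    gar = 100.0 * success / total
    return gar, 100.0 - gar

gar, frr = rates(257, 270)
print(round(gar, 3), round(frr, 3))   # 95.185 4.815
```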
Figure 3.44: GAR and FRR for fingerprint recognition system
Calculating FAR
The researcher carried out another experiment on the fingerprint recognition system by
taking 90 test samples of 30 persons, compared with the 30 train database samples.
This experiment yields the GRR and FAR of the fingerprint recognition system. Consider
a database of 30 samples for training and 90 samples for testing: the training set
contains 1 sample per person, and 3 samples of different persons are compared against
each. The Matlab code for calculating FAR is shown here:
-----------------------------------------------------------------------
clear all; clc;
addpath(genpath(pwd));
load('db.mat');
GRR = 0; FAR = 0;
fid = fopen('fp_far.txt','wt');
count = 100;
for i = 1:30
    str = ['comparison with ' num2str(count+i) '.jpg'];
    fprintf(fid,'%s \n',str);
    for j = 1:3
        person = count + i + j;
        if person > 130, person = person - 30; end
        filename = ['D:\phd_dtm_practical\FingerprintRecognition_University\TestDatabase\' ...
                    num2str(person) '_' num2str(j) '.jpg'];
        img = imread(filename);
        if ndims(img) == 3, img = rgb2gray(img); end   % color images
        disp(['Extracting features from ' filename ' ...']);
        ffnew = ext_finger(img,1);
        fprintf(['Computing similarity between ' num2str(j) ...
                 ' and database file ' num2str(i) ' from SU2014 :']);
        str1 = [num2str(person) '_' num2str(j) '.jpg'];
        score = match(ffnew, ff{i});
        if (score > 0.48), FAR = FAR + 1; else GRR = GRR + 1; end
        fprintf(fid,'%s %f \n',str1,score);
    end
end
display(GRR); display(FAR);
fprintf(fid,'%d \t %d \n',GRR,FAR);
fclose(fid);
-----------------------------------------------------------------------
By applying the above code, the researcher obtained the results shown in table 3.6; a
graphical representation is shown in figure 3.45.
Case   Success (out of 90)   Failure (out of 90)   GRR      FAR
1               88                     2           97.77%   2.23%

Table 3.6: GRR and FAR for fingerprint recognition system

Figure 3.45: GRR and FAR for fingerprint recognition system

From these experiments, we can be confident about the performance of the fingerprint
recognition system. Its success rate (GAR) is 95.18%, and it likewise has an
acceptable FAR. These results, however, were obtained under standard conditions; with
problems such as oily skin, scars or dirt on the finger, the results will be poorer.
3.5 Conclusion of prototyping unimodal biometric systems
From the experiments carried out with the facial and fingerprint recognition systems,
we can draw the following conclusions.
Sr. no.   Biometric system type           GAR       FRR      GRR      FAR
1         Face recognition (min_dist)     10.83%    89.16%   98.25%   1.75%
2         Face recognition (max_dist)     59.16%    40.83%   22.67%   77.32%
3         Face recognition (avg_dist)     100%      0%       66.56%   34.26%
4         Fingerprint recognition         95.185%   4.815%   97.77%   2.23%

Table 3.7: Performance comparison of facial and fingerprint recognition systems
From these experiments, we can say that the fingerprint recognition system gives
acceptable performance under standard conditions, but in the presence of noise such as
oily skin, scars, dirt or injury, it may not. At the same time, the facial recognition
system alone is not capable of acceptable performance. We therefore need to consider
more than one modality rather than a single biometric trait or unimodal biometric
system: to get optimum performance, a multimodal system with more than one modality
should be used.
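As a preview of that direction, score-level fusion of two unimodal matchers can be as simple as a weighted sum rule; the weights and scores below are purely hypothetical, and the actual fusion design is taken up in later chapters.

```python
# Sketch: weighted sum-rule fusion of two normalized [0, 1] match scores,
# giving more weight to the stronger (fingerprint) matcher.
def fuse(face_score, finger_score, w_face=0.3, w_finger=0.7):
    """Combine two match scores into one fused score."""
    return w_face * face_score + w_finger * finger_score

combined = fuse(0.40, 0.90)
print(round(combined, 2))   # 0.75
```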
References
[1] www.explainthatstuff.com
[2] www.biometricupdate.com
[3] Peter Gregory, Michael Simon, "Biometrics for Dummies", Wiley Publishing, Inc.,
2008
[4] Anil Jain, Ruud Bolle, Sharath Pankanti, "Biometrics: Personal Identification in
Networked Society", Kluwer Academic Publishers, 2002
[5] Samir Nanavati, Michael Thieme, Raj Nanavati, "Biometrics: Identity Verification
in a Networked World", Wiley Computer Publishing, 2002
[6] Arun Ross, Karthik Nandakumar, Anil Jain, "Handbook of Multibiometrics", Springer,
2006
[7] N. K. Ratha, J. H. Connell, and R. M. Bolle, "An Analysis of Minutiae Matching
Strength", in Proceedings of the Third International Conference on Audio- and
Video-Based Biometric Person Authentication (AVBPA), pp. 223-228, Halmstad, Sweden,
2001
[8] M. Turk, A. Pentland, "Face Recognition Using Eigenfaces", Conference on Computer
Vision and Pattern Recognition, 3-6 June 1991, Maui, HI, USA, pp. 586-591
[9] Marijeta Slavković, Dubravka Jevtić, "Face Recognition Using Eigenface Approach",
Serbian Journal of Electrical Engineering, Vol. 9, No. 1, 2012, pp. 121-130
[10] Joshua Abraham, Paul Kwan and Junbin Gao, "Fingerprint Matching Using a Hybrid
Shape and Orientation Descriptor", in State of the Art in Biometrics, Jucheng Yang
(Ed.), ISBN: 978-953-307-489-4, InTech, 2011
[11] Vahid K. Alilou, "Fingerprint Matching: A Simple Approach", www.mathworks.com
[12] P. N. Belhumeur, J. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces:
Recognition Using Class Specific Linear Projection", in ECCV (1), pp. 45-58, 1996
[13] John Woodward, Nicholas Orlans, Peter Higgins, "Biometrics: The Ultimate
Reference", Wiley India Publications