IEEE HST 2009
Biometric Sensor Image Fusion for Identity Verification: A Case Study with Wavelet-based Fusion Rules and Graph Matching
Authors: Dakshina Ranjan Kisku, Ajita Rattani, Phalguni Gupta, Jamuna Kanta Sing
Presented by: Dakshina Ranjan Kisku
Contact person: [email protected]
Agenda:
- Introduction
- Biometrics systems
- Modality based categorization and fusion levels in multibiometrics
- Wavelet decomposition and biometrics image fusion
- Fusion rules applied
- Overview of SIFT features
- Graph matching technique and verification
- Experimental results
- Conclusion
- Bibliography
Introduction:
Biometric sensor image fusion refers to a process that fuses multispectral images, captured at different resolutions and by different biometric sensors, to acquire richer and complementary information and produce a new, spatially enhanced fused image.
The fused image depicts spatially enhanced information of one or more biometric characteristics and is more understandable for human perception.
Fusing biometric images at a low abstraction level (i.e., sensor level) removes several inconsistencies, less relevant edge artifacts and noise from the fused images.
Biometrics systems:
In computer vision and machine vision applications, a biometric system can be thought of as an automatic identity verification system, in which a user is automatically recognized by his/her physiological or behavioral characteristics.
Biometrics can be used for establishing the identity of a person, where identity can be defined as follows:
Identity - the quality or condition of being the same in substance, composition, nature, properties, or in particular qualities under consideration (Oxford English Dictionary, 2004)
Contd…Biometrics systems
People are identified by three basic means:
Something you have (passport, Voter ID card, Driving license, etc.)
Something you know (password, PIN, etc.)
Something you are (human body)
Contd…Biometrics systems
Means of identity verification can be divided into three groups:
Possession-based (credit card, smart card)
- something you have
Knowledge-based (password, PIN)
- something you know
Biometrics-based (biometric characteristics)
- something you are
Modality based categorization:
Modality based categorization of biometric systems can be made on the basis of the biometric traits used.
Uni-biometric systems: when a single biometric trait is used for verification or identification, the system is called a uni-biometric system (face, fingerprint, palmprint, etc.).
Multi-biometric systems: when more than one biometric trait is used for identification or verification by fusing those traits, the system is called a multimodal biometric system (face and fingerprint, face and iris, etc.).
Fusion levels in multibiometrics:
Various levels of fusion in multibiometrics:
Feature level fusion
- face and fingerprint, face and iris biometrics, etc.
Match score level fusion
- face and voice, face and fingerprint, etc.
Rank level fusion
- face and fingerprint, etc.
Decision level fusion
- face and voice, etc.
Sensor level fusion (proposed)
- face and palmprint, fusion of gray and thermogram images, etc.
Wavelet decomposition and biometrics image fusion:
Multisensor biometric image fusion is performed with face and palmprint images, and the fused image represents a unique pattern.
Wavelet decomposition can be applied to the face and palmprint images independently, decomposing each into multiple channels depending on their local frequency content.
The wavelet transform provides an integrated framework to decompose biometric images into a number of new images, each of them having a different degree of resolution.
Contd…wavelet decomposition
Prior to image fusion, wavelet transforms are computed from the face and palmprint images.
The wavelet transform contains low-high bands, high-low bands and high-high bands at different scales including the low-low bands of the images at coarse level.
The low-low band has all the positive transform values and remaining bands have transform values which are fluctuating around zeros.
Wavelet transform decomposes an image recursively into several frequency levels and each
level contains transform values.
Contd…wavelet decomposition
Sub-image sequences are then fused by applying different wavelet fusion rules on the low and high frequency parts.
Finally, inverse wavelet transformation is
performed to restore the fused image.
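The decomposition step described above can be sketched in a minimal form. The following is an illustrative single-level 2-D Haar decomposition in plain Python (a sketch for exposition, not the authors' implementation, which applies a full multi-level DWT before fusion):

```python
# Minimal sketch: one level of a 2-D Haar wavelet decomposition, splitting
# an image into LL (low-low), LH, HL and HH (high-high) bands.

def haar_decompose(image):
    """Split a 2-D list (even dimensions) into LL, LH, HL, HH sub-bands."""
    def pairs(seq):
        # Haar analysis of one row/column: pairwise averages (low-pass)
        # and pairwise differences (high-pass).
        low = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
        high = [(seq[i] - seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
        return low, high

    # Transform the rows first.
    lows, highs = [], []
    for row in image:
        lo, hi = pairs(row)
        lows.append(lo)
        highs.append(hi)

    # Then transform the columns of each half.
    def column_pass(mat):
        lo_cols, hi_cols = [], []
        for col in map(list, zip(*mat)):
            lo, hi = pairs(col)
            lo_cols.append(lo)
            hi_cols.append(hi)
        return list(map(list, zip(*lo_cols))), list(map(list, zip(*hi_cols)))

    LL, LH = column_pass(lows)    # low-pass rows -> LL, LH
    HL, HH = column_pass(highs)   # high-pass rows -> HL, HH
    return LL, LH, HL, HH

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
LL, LH, HL, HH = haar_decompose(img)
print(LL)  # [[3.5, 5.5], [11.5, 13.5]] - 2x2 block averages, all positive
```

Note that the LL band holds block averages (all positive for image data), while the other bands fluctuate around zero, matching the description above.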
Contd…
Fig. 1. A generic structure of wavelet based fusion approach is shown.
Fusion rules applied:
The input images are decomposed by a discrete wavelet transform (DWT) and the wavelet coefficients are then selected using a set of fusion rules.
- “Maximum” wavelet fusion rule: at each position, the maximum of the wavelet coefficients is selected from the decompositions.
- “Mean” fusion rule: the mean of the wavelet coefficients is taken.
- “Up-down” fusion rule
- “Down-up” fusion rule
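As an illustration of the coefficient-selection step, the sketch below applies the “Maximum” and “Mean” rules to two small coefficient bands (assumed behaviour of the rules as described above; the “Up-down” and “Down-up” rules are not shown):

```python
# Sketch of coefficient-level fusion: combine two same-sized wavelet
# coefficient bands (e.g. from the face and palmprint decompositions).

def fuse_max(band_a, band_b):
    # "Maximum" rule: keep the coefficient with larger magnitude at each position.
    return [[a if abs(a) >= abs(b) else b for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]

def fuse_mean(band_a, band_b):
    # "Mean" rule: average the two coefficients at each position.
    return [[(a + b) / 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]

face_band = [[4, -1], [0, 2]]   # toy coefficient values
palm_band = [[3, -5], [1, 2]]
print(fuse_max(face_band, palm_band))   # [[4, -5], [1, 2]]
print(fuse_mean(face_band, palm_band))  # [[3.5, -3.0], [0.5, 2.0]]
```

The fused bands would then go through the inverse wavelet transform to restore the fused image, as in Fig. 1.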
Contd…
Fig. 2. Haar wavelet based fusion of face and palmprint is shown where the “Maximum” fusion rule is applied.
Contd…
Fig. 3. Haar wavelet based fusion is presented for face and palmprint images, where “Mean” fusion rule is applied.
Contd…
Fig. 4. Haar wavelet based fusion is presented, where Up-Down fusion rule is applied.
Contd…
Fig. 5. Haar wavelet based fusion scheme is shown, where Down-Up fusion is applied.
Overview of SIFT features:
The scale invariant feature transform (SIFT) was proposed by David Lowe [6] and has proved to be invariant to image rotation, scaling, partial illumination changes and camera viewpoint.
Local keypoints are detected in the following steps:
- select candidate feature points by searching for peaks in scale-space using a difference of Gaussians (DoG) function
- localize feature points using a measurement of their stability
- assign orientations based on local image properties
Contd…
- calculate feature descriptors
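The first step above (candidate detection by DoG peak search) can be illustrated in one dimension. This is a simplified sketch for intuition only; real SIFT builds a full 2-D scale-space pyramid:

```python
# 1-D illustration of difference-of-Gaussians (DoG) candidate detection:
# blur a signal at two scales, subtract, and keep local maxima.

import math

def gaussian_blur(signal, sigma):
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(x * x) / (2 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]           # normalize to sum 1
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp borders
            acc += k * signal[idx]
        out.append(acc)
    return out

def dog_peaks(signal, sigma1=1.0, sigma2=1.6):
    # The DoG approximates a band-pass filter; its local extrema are
    # keypoint candidates.
    dog = [a - b for a, b in zip(gaussian_blur(signal, sigma1),
                                 gaussian_blur(signal, sigma2))]
    return [i for i in range(1, len(dog) - 1)
            if dog[i] > dog[i - 1] and dog[i] > dog[i + 1]]

signal = [0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0]  # a single bump centred at index 5
print(dog_peaks(signal))
```

The peak of the bump (index 5) survives as the only candidate; in 2-D the same comparison is made against all scale-space neighbours.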
Fig. 6. SIFT feature extraction from fused image is shown.
Graph matching technique and verification:
For the interpretation of the fused image with keypoint descriptors, an attributed probabilistic graph G = {N, E, K, ζ} is considered for representation.
Here, N and E denote the nodes and edges, respectively, and K and ζ are the attributes associated with the nodes and edges of the graph.
The nodes correspond to fused image primitives, such as keypoint descriptors, and the edges link these nodes.
Authentication then becomes a graph matching problem over a pair of fused images, where the probe fused image graph is matched against the gallery fused image graph.
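One plausible way to hold such a graph in code (an assumed data layout, not the authors' structure) is a dictionary with the four components N, E, K and ζ:

```python
# Assumed data layout for the attributed graph G = {N, E, K, zeta}:
# nodes carry (toy) keypoint descriptors, edges carry a spatial attribute.
graph = {
    "N": ["n1", "n2", "n3"],                        # keypoint nodes
    "E": [("n1", "n2"), ("n2", "n3")],              # edges linking the nodes
    "K": {"n1": [0.1, 0.9],                         # node attributes:
          "n2": [0.4, 0.2],                         # toy 2-D descriptors
          "n3": [0.8, 0.7]},                        # (real SIFT uses 128-D)
    "zeta": {("n1", "n2"): 1.0,                     # edge attributes, e.g.
             ("n2", "n3"): 2.0},                    # keypoint distance
}

# Consistency: every edge endpoint is a node, every node has a descriptor.
assert all(a in graph["N"] and b in graph["N"] for a, b in graph["E"])
assert set(graph["K"]) == set(graph["N"])
```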
Contd…
Starting from the gallery model graph, a search is initiated in the probe graph for the node assignments that maximize the posterior probabilities.
To measure the similarity of nodes and edges for a pair of graphs drawn on fused images, let the two graphs be G' = {N', E', K', ζ'} and G'' = {N'', E'', K'', ζ''}.
Thus, for each gallery node n'_i we search for the most probable label (matching node) in the probe graph. Hence, it can be stated as

n''_j = arg max_{n''_j ∈ N''} P(n''_j | n'_i, K', K'', ζ', ζ'')
Contd…
The node match probabilities are refined iteratively, each probability being normalized by the support received from competing assignments:

P̂_{n_ij} = P_{n_ij} · Q_{n_ij} / Σ_{n_q ∈ N''} P_{n_iq} · Q_{n_iq}

where the support function Q gathers edge compatibilities over neighbouring nodes:

Q_{n_ij} = Π_{n_p ∈ N'} Σ_{n_q ∈ N''} P(e_{ip,jq} | ζ', ζ'') · P_{n_pq}
Contd…
The final correspondence for each pair of nodes is taken as the maximum of the updated probabilities:

P_{n_ij} = max_{n_i ∈ N', n_j ∈ N''} P̂_{n_ij}
Hence, the matching between a pair of graphs is established by using the posteriori probabilities and assigning the labels from the gallery graph to the points on the probe graph.
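A much-simplified sketch of this assignment step is given below: toy descriptor similarities stand in for the posterior probabilities, and each gallery node receives the probe node with the highest normalized score. This is an assumption for illustration and omits the edge-support term of the actual method:

```python
# Simplified node assignment between a gallery graph and a probe graph:
# normalized Gaussian similarities of toy descriptors stand in for the
# posteriors P(n''_j | n'_i, ...), and each gallery node takes the argmax.

import math

def similarity(d1, d2):
    # Gaussian similarity of two descriptor vectors.
    dist2 = sum((a - b) ** 2 for a, b in zip(d1, d2))
    return math.exp(-dist2)

def match_nodes(gallery, probe):
    """Map each gallery node to its most probable probe node."""
    matches = {}
    for gi, gdesc in gallery.items():
        scores = {pj: similarity(gdesc, pdesc) for pj, pdesc in probe.items()}
        total = sum(scores.values())
        posterior = {pj: s / total for pj, s in scores.items()}  # normalize
        matches[gi] = max(posterior, key=posterior.get)          # argmax
    return matches

gallery = {"g1": [0.0, 0.0], "g2": [1.0, 1.0]}   # toy 2-D descriptors
probe = {"p1": [0.1, 0.0], "p2": [0.9, 1.1]}
print(match_nodes(gallery, probe))  # {'g1': 'p1', 'g2': 'p2'}
```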
Experimental results:
The experiment is conducted on the IITK multimodal database of face and palmprint images; the database consists of 400 face images and 400 palmprint images of 200 individuals.
For this evidence fusion, different wavelet fusion rules are applied, namely the “maximum”, “UD”, “DU” and “mean” fusion rules.
Multisensor biometric fusion based on ‘maximum’ fusion rule produces 98.81% accuracy, while biometric fusion based on ‘mean’ fusion rule, fusion based on ‘DU’ fusion rule, and fusion based on ‘UD’ fusion rule produce 97.43%, 96.27% and 89.93% accuracies, respectively, as shown in the ROC curve.
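For illustration only, verification accuracy at a fixed decision threshold can be computed from genuine and impostor match scores as sketched below; the scores are made up and not taken from the experiment:

```python
# Hypothetical sketch: verification accuracy at a threshold, given match
# scores for genuine pairs (same person) and impostor pairs (different people).

def accuracy(genuine, impostor, threshold):
    # A genuine pair is correct if accepted (score >= threshold);
    # an impostor pair is correct if rejected (score < threshold).
    correct = (sum(s >= threshold for s in genuine)
               + sum(s < threshold for s in impostor))
    return correct / (len(genuine) + len(impostor))

genuine = [0.92, 0.85, 0.78, 0.88]   # made-up scores
impostor = [0.30, 0.45, 0.51, 0.80]
print(accuracy(genuine, impostor, 0.6))  # 0.875
```

Sweeping the threshold over the score range and recording the accept/reject rates at each value is what produces the ROC curve shown on the next slide.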
Contd…
[ROC curves: True Positive Rate vs. False Accept Rate (log scale) for the Maximum, Mean, DU and UD fusion rules]
Figure. Performances are shown through ROC curves determined from the different wavelet based fusion techniques. The fusion rules are the “Down-up (DU)”, “Maximum”, “Mean” and “Up-down (UD)” wavelet fusion rules.
Conclusion:
In this paper, a multisensor biometric image fusion scheme has been addressed for multibiometric user authentication.
The proposed technique efficiently minimizes the irrelevant variability and the inconsistencies that exist among the different biometric modalities and their characteristics by fusing the biometric images at low level.
The results show that the proposed method, operating at the sensor level, is robust, computationally efficient and less sensitive to unwanted noise, which confirms the validity and efficacy of the system.
Bibliography:
A. K. Jain and A. Ross, “Multibiometric systems,” Communications of the ACM, vol. 47, no. 1, pp. 34 - 40, 2004.
A. Ross, A. K. Jain, and J. K. Qian, “Information fusion in biometrics,” Pattern Recognition Letters, vol. 24, no. 13, pp. 2115 – 2125, 2003.
A. Ross, and R. Govindarajan, "Feature level fusion using hand and face biometrics," Proceedings of SPIE Conference on Biometric Technology for Human Identification II, 2005, pp. 196 – 204.
T. Stathaki.: “Image fusion – algorithms and applications,” Academic Press, 2008.
http://www.eecs.lehigh.edu/SPCRL/IF/image_fusion.htm
D. G. Lowe, “Distinctive image features from scale invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, 2004.
U. Park, S. Pankanti, and A. K. Jain, "Fingerprint verification using SIFT features," Proceedings of SPIE Defense and Security Symposium, 2008.
A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, “Robust feature-level multibiometric classification,” Proceedings of the Biometric Consortium Conference – A special issue in Biometrics, pp. 1- 6, 2006.
D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli, “Face identification by SIFT-based complete graph topology”, Proceedings of the IEEE Workshop on Automatic Identification Advanced Technologies, 2007, pp. 63 – 68.
H. Yaghi, and H. Krim, “Probabilistic graph matching by canonical decomposition”, Proceedings of the International Conference on Image Processing, 2008, pp. 2368 – 2371.
R. Sitaraman, and A. Rosenfeld, “Probabilistic analysis of two stage matching”, Pattern Recognition, vol. 22, no. 3, pp. 331 – 343, 1989.
L. S. Davis, “Shape matching using relaxation techniques,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, no. 1, pp. 60-72, Jan. 1979.
A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, “Feature level fusion of face and fingerprint biometrics”, Proceedings of the Biometrics: Theory, Applications and Systems, 2007.
C. Hsu, and R. Beuker, “Multiresolution feature-based image registration”, Proceedings of the Visual Communications and Image Processing, 2000, pp. 1 – 9.
A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4 – 20, 2004.
A. K. Jain, A. Ross, and S. Pankanti, “Biometrics: A tool for information security”, IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 125 – 143, 2006.
Questions ???