
Research Article

Patch Based Collaborative Representation with Gabor Feature and Measurement Matrix for Face Recognition

Zhengyuan Xu,1 Yu Liu,2 Mingquan Ye,3 Lei Huang,1 Hao Yu,4 and Xun Chen5

1Department of Medical Engineering, Wannan Medical College, Wuhu 241002, China
2Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China
3School of Medical Information and Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu 241002, China
4Anhui Province Key Laboratory of Active Biological Macro-Molecules Research, Central Laboratory of Wannan Medical College, Wuhu 241002, China
5Department of Electronic Science and Technology, University of Science and Technology of China, Hefei 230027, China

Correspondence should be addressed to Xun Chen; xunchen@ece.ubc.ca

Received 13 December 2017; Accepted 29 March 2018; Published 10 May 2018

Academic Editor: Nibaldo Rodríguez

Copyright © 2018 Zhengyuan Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In recent years, sparse representation based classification (SRC) has emerged as a popular technique in face recognition. Traditional SRC focuses on the role of the ℓ1-norm but ignores the impact of collaborative representation (CR), which employs all the training samples over all the classes to represent a test sample. Due to issues such as expression, illumination, pose, and small sample size, face recognition remains a challenging problem. In this paper, we propose a patch based collaborative representation method for face recognition via Gabor feature and measurement matrix. Using patch based collaborative representation, the method addresses the inaccuracy of the linear representation under small sample sizes. Compared with holistic features, the multiscale and multidirection Gabor feature is more robust, and the use of a measurement matrix reduces the large data volume caused by the Gabor feature. Experimental results on several popular face databases, including Extended Yale B, CMU PIE, and LFW, indicate that the proposed method is more competitive in robustness and accuracy than conventional SR and CR based methods.

1. Introduction

Face recognition (FR) is one of the most classical and challenging problems in pattern classification, computer vision, and machine learning [1]. Although face recognition technology has achieved a series of successes, it still confronts many challenges caused by variations of illumination, pose, facial expression, and noise in the real world [2, 3]. In real applications, the small sample size problem of FR is an even more difficult issue due to the limited availability of training samples.

In terms of classification schemes, several widespread pattern classification methods are used in FR. Generally, there are two types of pattern classification methods [4, 5]: parametric methods and nonparametric methods. Parametric methods, such as the support vector machine (SVM) [6, 7], center on learning the parameters of a hypothesized classification model from the training samples and then use them to identify the class labels of test samples. In contrast, nonparametric methods, such as nearest neighbor (NN) [8] and nearest subspace (NS) [9], use the training samples directly to identify the class labels of test samples. Recent works have revealed an advantage of nonparametric methods over parametric methods [4, 10, 11]. Distance based classifiers are widely used among nonparametric methods for FR [11], such as the nearest subspace classifier (NSC) [12]. A key issue in distance based nonparametric classifiers is how to represent the test sample [4]. Recently, Wright et al. pioneered the use of SRC for robust FR [13]. First, the test sample was sparsely coded over the training samples; then its class label was identified by choosing the class that yields the smallest coding error. Although sparse

Hindawi Mathematical Problems in Engineering, Volume 2018, Article ID 3025264, 13 pages. https://doi.org/10.1155/2018/3025264


representation related methods [13, 14] achieved great success in FR, those methods focus on the role of the ℓ1-norm while ignoring the role of CR [15], which uses all classes of training samples to represent the target test sample. In the study [16], Zhu et al. argued that both SRC and the collaborative representation based classifier (CRC) suffer serious performance degradation when the training sample size is very small, because the test sample cannot be well represented. In order to solve the small sample size problem, they proposed to conduct CRC on patches and named the method patch based CRC (PCRC).

PCRC and some related works [17, 18] have demonstrated their effectiveness on the small sample size problem of FR; however, some key issues remain to be optimized. On one hand, all the PCRC related works used the original face feature, but the original feature cannot effectively handle the variations of illumination, pose, facial expression, and noise [19]. On the other hand, the data redundancy existing in these methods leads to poor performance in classification accuracy and computational cost. The face feature problem has been noticed by some recent works, in which efficient and effective image representations have been proposed using local and holistic features. Eigenface [20, 21], Randomface [22], and Fisherface [21] are all classical holistic features [23], but other works argued that holistic features are easily affected by variations such as illumination, pose, facial expression, and noise. Therefore, they introduced local features such as LBP [24] and the Gabor filter [25, 26]. The Gabor filter has been successfully and widely used in FR [26, 27]. The Gabor feature can effectively extract local face features at multiple scales and multiple directions; however, this may lead to a sharp rise in data volume [28]. The key to solving the large data volume problem is dimensionality reduction. Numerous dimensionality reduction methods have been put forward to find projections that better separate the classes in low-dimensional spaces, among which linear subspace analysis has received more and more attention owing to its good properties, including principal component analysis (PCA) [28], linear discriminant analysis (LDA) [29], and independent component analysis (ICA) [30]. A lot of works showed that PCA has optimal performance in FR [31–33].

In this paper, we first attempted to alleviate the influence of an unreliable environment on small sample size FR; to this end, we proposed to use the Gabor feature and applied it to PCRC. Then, to improve the computational efficiency of GPCRC, we proposed to use PCA for dimension reduction, as well as measurement matrices, including Random Gaussian matrices [34], Toeplitz and cyclic matrices [35], and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16) [36], to reduce the dimension of the transformed signal and use the low-dimensional data to accurately represent the face. The experimental results show that GPCRC and its improved methods are effective.

Section 2 briefly reviews SRC and CRC. Section 3 describes the proposed GPCRC and its improved methods. Section 4 illustrates the experiments and the results, and Section 5 concludes the paper.

2. Related Works

2.1. Sparse Representation Based Classification. SRC was first reported by Wright et al. [13] for robust FR. In SRC, let $x_i \in \mathbb{R}^{m \times n}$ denote the $i$-th face dataset, where each column of $x_i$ is a face sample of the $i$-th individual. Assuming there are $k$ classes of face samples, let $X = [x_1, x_2, \ldots, x_k]$. When identifying a target face test sample $y$, the coding $y \approx X\hat{\alpha}$ is used, where $\hat{\alpha} = [\hat{\alpha}_1; \hat{\alpha}_2; \ldots; \hat{\alpha}_k]$ and the coefficient vector $\hat{\alpha}_i$ is the encoding over the $i$-th individual's samples. If $y$ is from the $i$-th class, then $y \approx x_i\hat{\alpha}_i$ holds best, which means that a large number of coefficients in $\hat{\alpha}_k$ ($k \neq i$) are close to zero and only $\hat{\alpha}_i$ remains intact. Thus, the class (ID) of the target face test sample $y$ can be decoded from the sparse nonzero coefficients in $\hat{\alpha}$.

The SRC method [13] is summarized as follows.

Step 1. Given the $k$-class face training samples $X$ and a test sample $y$.

Step 2 (dimension adjustment). $X$ and $y$ are projected onto the corresponding low-dimensional feature space $\bar{X}$, $\bar{y}$ using the traditional dimensionality reduction technique (PCA).

Step 3 ($\ell_2$-norm). Normalize the columns of $\bar{X}$ and $\bar{y}$, respectively, to unit $\ell_2$-norm.

Step 4. Solve the $\ell_1$-minimization problem: $\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1$ subject to $\bar{X}\alpha = \bar{y}$, or $\|\bar{X}\alpha - \bar{y}\|_2 < \varepsilon$.

Step 5. Compute the residuals to identify $y$: $r_i(y) = \|\bar{y} - \bar{x}_i\hat{\alpha}_i\|_2$, $\mathrm{ID}(y) = \arg\min_i [r_i(y)]$, $i = 1, 2, \ldots, k$.

2.2. Collaborative Representation Based Classification. Zhang et al. [15] argued that it is the CR, not the ℓ1-norm sparsity, that makes SRC powerful for face classification. Collaborative representation uses all classes (individuals) of training samples to represent the target test sample. In order to reduce the complexity of coding $y$ over $X$, Zhang et al. proposed a regularized least-squares method, which is

$$\hat{\rho} = \arg\min_{\rho} \|y - X\rho\|_2^2 + \lambda \|\rho\|_2^2 \qquad (1)$$

where $\lambda$ is a regularization parameter. Equation (1) has dual roles: first, it stabilizes the least-squares solution; second, it introduces a "sparsity" that is much weaker than the $\ell_1$-norm but easier to solve. The CR with the regularized least-squares of (1) can be solved as follows:

$$\hat{\rho} = (X^T X + \lambda I)^{-1} X^T y \qquad (2)$$

Let $P = (X^T X + \lambda I)^{-1} X^T$. Since $P$ is independent of $y$, it can be precalculated as a projection matrix. When a target test sample $y$ comes in to be identified, $y$ is simply projected onto $P$ via $Py$, which makes CR very fast. The classification by $\hat{\rho}$ is very similar to the classification by $\hat{\alpha}$ in the SRC method. In addition to the representation residual $\|y - x_i\hat{\rho}_i\|_2$, where $\hat{\rho}_i$ is the coefficient vector

[Figure 1: Patch based collaborative representation. A test sample $y$ is divided into patches $y_1, y_2, \ldots, y_s$, and each patch $y_j$ is collaboratively coded as $\hat{\rho}_j = \arg\min_{\rho_j} \|y_j - M_j\rho_j\|_2^2 + \lambda\|\rho_j\|_2^2$.]

associated with class $i$, the $\ell_2$-norm "sparsity" $\|\hat{\rho}_i\|_2$ also contains abundant information for classification. The CRC method [15] is summarized as follows.

Step 1. Given the $k$ classes of face training samples $X$ and a test sample $y$.

Step 2 (reduce the dimension). Use PCA to project $X$ and $y$ onto a low-dimensional feature space and obtain $\bar{X}$ and $\bar{y}$.

Step 3 (normalization). Normalize the columns of $\bar{X}$ and $\bar{y}$ to unit $\ell_2$-norm.

Step 4. Encode $\bar{y}$ over $\bar{X}$: $P = (\bar{X}^T\bar{X} + \lambda I)^{-1}\bar{X}^T$, $\hat{\rho} = P\bar{y}$.

Step 5. Compute the regularized residuals for identification: $r_i(y) = \|\bar{y} - \bar{x}_i\hat{\rho}_i\|_2 / \|\hat{\rho}_i\|_2$, $\mathrm{ID}(y) = \arg\min_i [r_i(y)]$, $i = 1, 2, \ldots, k$.

2.3. Patch Based Collaborative Representation. From the equations of sparse representation and collaborative representation, it can be seen that if the linear system determined by the training dictionary $X$ is underdetermined, the linear representation of the target test sample $y$ over $X$ can be very accurate. In reality, however, the available samples of each subject are limited, and the sparse representation and collaborative representation methods may fail because the linear representation of the target test sample may not be accurate enough. In order to alleviate this problem, Zhu et al. proposed the PCRC method for FR [16]. As shown in Figure 1, the target face test sample $y$ is divided into a set of overlapping face image patches $y_1, y_2, \ldots, y_s$ according to the patch size. Each divided face image patch $y_j$ is collaboratively represented over the local dictionary $M_j$ extracted from $X$ at the position corresponding to patch $y_j$. In this case, since the linear system determined by the local dictionary $M_j$ is often underdetermined, the patch based representation is more accurate than the holistic face image representation.
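As a concrete illustration, the CRC pipeline above (Eqs. (1)-(2) plus the normalization and residual steps, omitting the PCA step) can be sketched in a few lines of NumPy. This is a minimal sketch; the function name and the toy data in the usage example are ours, not the paper's:

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.005):
    """Minimal CRC sketch: l2-normalize the dictionary, code y over ALL
    classes with regularized least squares (Eqs. (1)-(2)), then pick the
    class with the smallest regularized residual (Step 5)."""
    X = X / np.linalg.norm(X, axis=0)              # Step 3: unit l2 columns
    y = y / np.linalg.norm(y)
    # P depends only on X, so it can be precomputed once for all queries
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    rho = P @ y                                    # Eq. (2): rho = P y
    best, best_r = None, np.inf
    for c in np.unique(labels):
        m = labels == c                            # columns of class c
        r = np.linalg.norm(y - X[:, m] @ rho[m]) / np.linalg.norm(rho[m])
        if r < best_r:
            best, best_r = c, r
    return best
```

For example, with a four-column dictionary whose first two columns belong to class 0, a query close to those columns is assigned class 0; precomputing $P$ once is what makes CRC fast at test time.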

3. Patch Based Collaborative Representation Using Gabor Feature and Measurement Matrix for Face Recognition

3.1. Gabor Feature. The Gabor feature has been widely used in FR because of its robustness to illumination, expression, and pose compared to holistic features. Yang and Zhang applied a multiscale and multidirectional Gabor dictionary to SRC for FR [19], which further improves the robustness of the algorithm. Inspired by these previous works, in this paper we integrate the Gabor feature into the PCRC framework to improve its robustness.

A Gabor filter with direction $\mu$ and scale $\nu$ is defined as follows [25, 26]:

$$\Psi_{\mu,\nu}(z) = \frac{\|k_{\mu,\nu}\|^2}{\delta^2}\, e^{-\|k_{\mu,\nu}\|^2\|z\|^2/2\delta^2} \left[e^{i k_{\mu,\nu} \cdot z} - e^{-\delta^2/2}\right], \quad k_{\mu,\nu} = k_\nu e^{i\psi_\mu}, \;\; \psi_\mu = \frac{\pi\mu}{8}, \;\; k_\nu = \frac{k_{\max}}{f^\nu} \qquad (3)$$

where $z = (x, y)$ denotes the pixel coordinates, $k_{\max}$ is the maximum frequency, $f$ is the spacing factor between kernels in the frequency domain, and $\delta$ determines the bandwidth of the filter. The convolution of a target image Img and the wavelet kernel $\Psi_{\mu,\nu}$ is expressed as

$$G_{\mu,\nu}(z) = \mathrm{Img}(z) * \Psi_{\mu,\nu}(z) = M_{\mu,\nu}(z) \cdot \exp(i\theta_{\mu,\nu}(z)) \qquad (4)$$

where $M_{\mu,\nu}$ ($\mu = 0, 1, \ldots, 7$; $\nu = 0, 1, \ldots, 4$) is the Gabor amplitude and $\theta_{\mu,\nu}(z)$ is the Gabor phase; the local energy change in the image is expressed by the amplitude information. Because the Gabor phase changes periodically with spatial position while the amplitude is relatively smooth and stable [19, 25, 26], only the Gabor magnitude was used in this paper, as shown in Figure 2.
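As an illustration of Eqs. (3)-(4), the magnitude responses of the 8-direction, 5-scale filter bank can be computed as below. This is a NumPy sketch: the 11-pixel kernel window and the FFT-based convolution are our assumptions, not specified in the paper.

```python
import numpy as np

def gabor_kernel(mu, nu, kmax=np.pi / 2, f=np.sqrt(2), delta=np.pi, size=11):
    """Gabor kernel of Eq. (3) for direction mu (0..7) and scale nu (0..4)."""
    k = kmax / f**nu                               # k_nu = kmax / f^nu
    phi = np.pi * mu / 8                           # psi_mu = pi*mu/8
    kx, ky = k * np.cos(phi), k * np.sin(phi)      # k_{mu,nu} as a 2D vector
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    z2 = xs**2 + ys**2
    envelope = (k**2 / delta**2) * np.exp(-k**2 * z2 / (2 * delta**2))
    carrier = np.exp(1j * (kx * xs + ky * ys)) - np.exp(-delta**2 / 2)
    return envelope * carrier

def gabor_magnitudes(img):
    """Magnitude of Eq. (4) for all 8 x 5 = 40 kernels (FFT convolution)."""
    H, W = img.shape
    out = []
    for nu in range(5):
        for mu in range(8):
            ker = gabor_kernel(mu, nu)
            kh, kw = ker.shape
            s = (H + kh - 1, W + kw - 1)           # full linear convolution
            full = np.fft.ifft2(np.fft.fft2(img, s) * np.fft.fft2(ker, s))
            out.append(np.abs(full[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]))
    return np.stack(out)                           # shape (40, H, W)
```

Stacking the 40 magnitude maps is what multiplies the feature dimension by 40, which is exactly the data-volume problem that Section 3.2 addresses.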

3.2. Measurement Matrix. Unfortunately, although the Gabor feature enhances the robustness of face image representation, it brings higher dimensions to the training sets than the holistic feature does; in other words, the computation cost and computation time are increased. In order to solve this problem, a further dimension reduction is necessary. We first considered PCA [31–33] for our method. The steps of PCA are as follows. Assume $N$ sample images $x_i \in \mathbb{R}^m$, $i = 1, 2, \ldots, N$. Firstly, normalize each sample (subtract the mean and then divide by the variance), converting it to a vector $x_i^*$ that follows the normal distribution $(0, 1)$. Secondly, compute the eigenvectors of the covariance matrix $Q = XX^T$, where $X = [x_1^*, x_2^*, \ldots, x_N^*]$ and $\lambda E = QE$, with $E$ the eigenvector corresponding to the eigenvalue $\lambda$. Thirdly, sort the eigenvectors by the size of their eigenvalues and take the first $p$ to form a linear transformation matrix $W_{\mathrm{PCA}} \in \mathbb{R}^{m \times p}$. We can then use $y_i = W_{\mathrm{PCA}}^T x_i$ to reduce the dimension. But if $m \gg N$, the matrix $XX^T \in \mathbb{R}^{m \times m}$ would be very large. In order to solve this problem, singular value decomposition is usually used. The eigenvalues


[Figure 2: Patch based collaborative representation using Gabor feature (patches and their Gabor magnitude features).]

$\lambda$ and eigenvectors $\mu_i$ of $XX^T$ are obtained by calculating the eigenvalues $\delta_i$ and eigenvectors $v_i$ of $X^TX$ (the first $p$): $\mu_i = (1/\sqrt{\delta_i})\,X v_i$, $i = 1, 2, \ldots, p$.
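The Gram-matrix trick described above (recovering the first $p$ eigenvectors of the large $m \times m$ matrix $XX^T$ from the small $N \times N$ matrix $X^TX$) can be sketched as follows. The centering choice, the numerical guard, and the function name are our assumptions:

```python
import numpy as np

def pca_small_sample(X, p):
    """PCA for m >> N: the top-p eigenvectors of the m x m matrix X X^T
    are recovered from the small N x N matrix X^T X via
    u_i = X v_i / sqrt(d_i), as in the text. Columns of X are samples."""
    Xc = X - X.mean(axis=1, keepdims=True)         # center each feature
    G = Xc.T @ Xc                                  # N x N Gram matrix
    d, V = np.linalg.eigh(G)                       # ascending eigenpairs
    order = np.argsort(d)[::-1][:p]                # keep the p largest
    d, V = d[order], V[:, order]
    U = Xc @ V / np.sqrt(np.maximum(d, 1e-12))     # m x p projection basis
    return U                                       # reduce with U.T @ x
```

The returned columns are orthonormal eigenvectors of the sample covariance, so projecting with `U.T` implements $y_i = W_{\mathrm{PCA}}^T x_i$ without ever forming the $m \times m$ matrix.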

In summary, we found that PCA and its related algorithms have two obvious shortcomings [37, 38]. Firstly, the leading eigenvectors encode mostly illumination and expression rather than discriminating information. Secondly, the actual computation is very demanding and may fail with small samples.

Inspired by the random-face feature extraction method illustrated in the literature [38], we used Random Gaussian matrices $\Phi \in \mathbb{R}^{M \times N}$ ($M \ll N$) as a measurement matrix to measure face images. The measurement matrix $\Phi$ is applied to the redundant dictionary $X$ to obtain $A = \Phi X \in \mathbb{R}^{M \times S}$, where $X = [x_1, x_2, \ldots, x_S] \in \mathbb{R}^{N \times S}$. In Figure 3(a), for any test image $x$, the measurements $y$ are obtained by $y = \Phi x \in \mathbb{R}^{M \times 1}$. In essence, using a measurement matrix to reduce the dimension of the image differs from sparse representation theory: the dimension $M$ of the measurements $y$ is determined by the measurement matrix $\Phi$ and is not limited by the number of training samples.

However, some literature [35, 39] suggested that Random Gaussian matrices are uncertain, which limits their practical application, and Toeplitz and cyclic matrices were proposed for signal reconstruction. Toeplitz and cyclic matrices rotate a row vector to generate the whole matrix. Usually the values of the vector in Toeplitz and cyclic matrices are ±1, with each element independently following the Bernoulli distribution; therefore, they are easy to implement in hardware in practical applications. Based on the above analysis, we further used Toeplitz and cyclic matrices and their improved version (namely, the Deterministic sparse Toeplitz matrices [36]) for our method. The relevant measurement matrices used in this paper are described in detail below. The operating mechanism of each measurement matrix is shown in Figure 3.
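The rotating-row construction with ±1 Bernoulli entries described above can be sketched as follows (a hypothetical helper; the paper does not fix a normalization or a random seed):

```python
import numpy as np

def cyclic_bernoulli(M, N, seed=0):
    """M x N cyclic measurement matrix: a single length-N row vector with
    i.i.d. +-1 (Bernoulli) entries is rotated to generate every row."""
    rng = np.random.default_rng(seed)
    row = rng.choice([-1.0, 1.0], size=N)          # one Bernoulli row
    return np.stack([np.roll(row, i) for i in range(M)])
```

Because every row is a shift of one stored vector, only $N$ values need to be generated and stored, which is the hardware-friendliness the text refers to.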

3.2.1. Random Gaussian Matrices. The format of $m \times n$ Random Gaussian matrices is expressed as follows [34, 38]:

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \qquad (5)$$

where each element $a_{ij}$ is independently drawn from a Gaussian distribution with mean $0$ and variance $1/m$.
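A minimal sketch of Eq. (5) in use, with entries drawn i.i.d. from $N(0, 1/M)$ (the seeding and the function name are our choices):

```python
import numpy as np

def gaussian_measure(x, M, seed=0):
    """Compress an N-dim feature to M dims with a Random Gaussian
    measurement matrix whose entries are i.i.d. N(0, 1/M), per Eq. (5)."""
    rng = np.random.default_rng(seed)
    Phi = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, x.size))
    return Phi @ x                                 # y = Phi x, with M << N
```

With variance $1/M$, the measurement roughly preserves the signal's energy in expectation ($E\|\Phi x\|_2^2 = \|x\|_2^2$), which is why the compressed patches remain usable for the collaborative coding of Section 3.3.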

3.2.2. Toeplitz and Cyclic Matrices. The concrete form of $M \times N$ Toeplitz and cyclic matrices [35, 39] is presented below:

$$\begin{bmatrix} a_N & a_{N-1} & \cdots & a_1 \\ a_{N+1} & a_N & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ a_{N+M-1} & a_{N+M-2} & \cdots & a_M \end{bmatrix} \qquad (6)$$

$$\begin{bmatrix} a_N & a_{N-1} & \cdots & a_1 \\ a_1 & a_N & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ a_{M-1} & a_{M-2} & \cdots & a_M \end{bmatrix} \qquad (7)$$

Equation (6) is the Toeplitz matrix: the entries along each diagonal, $a_{i,j} = a_{i+1,j+1}$, are constant. If the additional condition $a_i = a_{N+i}$ is satisfied, (6) becomes the cyclic matrix (7), and its elements $\{a_i\}_{i=1}^{N+M-1}$ follow a certain probability distribution $P(a)$.

3.2.3. Deterministic Sparse Toeplitz Matrices. The construction of Deterministic sparse Toeplitz matrices [36] is based on the Toeplitz and cyclic matrices; it is illustrated here by the example of random spacing Toeplitz matrices with an interval of $\Delta = 2$. The independent elements in the first row and the first column of (6) constitute the vector $T$:

$$T = [a_N \;\; a_{N-1} \;\; \cdots \;\; a_1 \;\; a_{N+1} \;\; \cdots \;\; a_{N+M-1}] \qquad (8)$$

The vector $T$ in (8) contains all the independent elements of (6), and random sparse spacing is conducted on it. Then


[Figure 3: The operating mechanism of the measurement matrix. (a) The linear measurement process $y = \Phi x$, with $y \in \mathbb{R}^{M \times 1}$ and $\Phi \in \mathbb{R}^{M \times N}$. (b) The structure of the $M \times N$ measurement matrices applied to the Gabor feature of an image patch ($N$ dimensions): Random Gaussian matrices; Toeplitz and cyclic matrices; Deterministic sparse Toeplitz matrices ($\Delta = 4, 8, 16$).]

assignment operations are performed on $T$: the elements $a_i$, $i \in \Lambda$, where $\Lambda$ is a set of $(N + M - 1)/\Delta$ indexes randomly chosen from the index sequence $1 \to N + M - 1$, obey the independent and identically distributed (i.i.d.) Gaussian distribution, while the other elements are $0$. Finally, we obtain the Deterministic sparse Toeplitz matrices with $a_{i+1,j+1} = a_{i,j}$, according to the construction of the Toeplitz and cyclic matrices.
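The construction above can be sketched directly from the text (a minimal sketch; the seeding and the indexing convention for $T$ are our assumptions):

```python
import numpy as np

def sparse_toeplitz(M, N, delta=4, seed=0):
    """Deterministic sparse Toeplitz measurement matrix: of the N+M-1
    independent entries in the vector T of Eq. (8), only (N+M-1)/delta
    randomly chosen ones are i.i.d. Gaussian and the rest are zero; the
    M x N matrix then satisfies a_{i+1,j+1} = a_{i,j} as in Eq. (6)."""
    rng = np.random.default_rng(seed)
    L = N + M - 1
    T = np.zeros(L)                                # T holds a_1 .. a_{N+M-1}
    support = rng.choice(L, size=L // delta, replace=False)
    T[support] = rng.normal(size=support.size)     # sparse i.i.d. entries
    # Row i of the Toeplitz matrix is T reversed and shifted by i
    A = np.empty((M, N))
    for i in range(M):
        A[i] = T[i:i + N][::-1]
    return A
```

Only $(N + M - 1)/\Delta$ nonzero values need to be generated, so larger $\Delta$ gives a sparser, cheaper matrix, which is the trade-off examined in the experiments of Section 4.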

3.3. The Proposed Face Recognition Approach. Although PCRC can indeed solve the small sample size problem, it is still based on the original features of the patches, and its robustness and accuracy are yet to be improved. Based on the above analysis, the Gabor feature and the measurement matrix are infused into PCRC for FR, which not only solves the small sample size problem but also enhances the robustness and efficiency of the method.

Our proposed method for FR is summarized as follows.

Step 1 (input). Face training samples $X$ and test sample $y$.

Step 2 (patch). Divide each of the $k$ face training samples in $X$ into $s$ patches and divide the test sample $y$ into $s$ patches, where $y_i$ and $M_{ij}$ are the sample patches at corresponding positions:

$$X = \begin{bmatrix} M_1 \\ \vdots \\ M_i \\ \vdots \\ M_s \end{bmatrix} = \begin{bmatrix} M_{11} & \cdots & M_{1j} & \cdots & M_{1k} \\ \vdots & & \vdots & & \vdots \\ M_{i1} & \cdots & M_{ij} & \cdots & M_{ik} \\ \vdots & & \vdots & & \vdots \\ M_{s1} & \cdots & M_{sj} & \cdots & M_{sk} \end{bmatrix}, \quad y = \begin{bmatrix} y_1 \\ \vdots \\ y_i \\ \vdots \\ y_s \end{bmatrix} \qquad (9)$$


Step 3 (extract and measure features). (1) Extract Gabor features from the patches of the training samples and the test sample, respectively, to obtain

$$G = \begin{bmatrix} G_1 \\ \vdots \\ G_i \\ \vdots \\ G_s \end{bmatrix} = \begin{bmatrix} G_{11} & \cdots & G_{1j} & \cdots & G_{1k} \\ \vdots & & \vdots & & \vdots \\ G_{i1} & \cdots & G_{ij} & \cdots & G_{ik} \\ \vdots & & \vdots & & \vdots \\ G_{s1} & \cdots & G_{sj} & \cdots & G_{sk} \end{bmatrix}, \quad G_y = \begin{bmatrix} G_{y_1} \\ \vdots \\ G_{y_i} \\ \vdots \\ G_{y_s} \end{bmatrix} \qquad (10)$$

(2) Carry out the measurement and reduce the dimension of each patch feature $G_{ij}$ ($i = 1, 2, \ldots, s$; $j = 1, 2, \ldots, k$) using the measurement matrix $\Phi$ to obtain the measured signals

$$A = \begin{bmatrix} A_1 \\ \vdots \\ A_i \\ \vdots \\ A_s \end{bmatrix} = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1k} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{ik} \\ \vdots & & \vdots & & \vdots \\ a_{s1} & \cdots & a_{sj} & \cdots & a_{sk} \end{bmatrix} = \begin{bmatrix} \Phi G_{11} & \cdots & \Phi G_{1j} & \cdots & \Phi G_{1k} \\ \vdots & & \vdots & & \vdots \\ \Phi G_{i1} & \cdots & \Phi G_{ij} & \cdots & \Phi G_{ik} \\ \vdots & & \vdots & & \vdots \\ \Phi G_{s1} & \cdots & \Phi G_{sj} & \cdots & \Phi G_{sk} \end{bmatrix}, \quad B = \begin{bmatrix} b_1 \\ \vdots \\ b_i \\ \vdots \\ b_s \end{bmatrix} = \begin{bmatrix} \Phi G_{y_1} \\ \vdots \\ \Phi G_{y_i} \\ \vdots \\ \Phi G_{y_s} \end{bmatrix} \qquad (11)$$

Step 4. Collaboratively represent each patch measurement signal $b_i$ of the test sample over the local dictionary of training sample measurement signals $A_i$:

$$\hat{\rho}_i = \arg\min_{\rho_i} \|b_i - A_i\rho_i\|_2^2 + \lambda \|\rho_i\|_2^2, \quad i = 1, 2, \ldots, s \qquad (12)$$

Equation (12) can be easily solved in closed form:

$$\hat{\rho}_i = P b_i, \quad P = (A_i^T A_i + \lambda I)^{-1} A_i^T, \quad i = 1, 2, \ldots, s \qquad (13)$$

where $I$ is the identity matrix.

Step 5. The recognition result of patch $y_i$ is

$$r_{ij}(y_i) = \frac{\|b_i - a_{ij}\hat{\rho}_i^j\|_2}{\|\hat{\rho}_i^j\|_2}, \quad i = 1, 2, \ldots, s, \; j = 1, 2, \ldots, k; \qquad \mathrm{ID}(r_{ij}) = \arg\min_j [r_{ij}(y_i)] \qquad (14)$$

Step 6. Use majority voting over the $s$ patch results to obtain the final recognition result.
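The per-patch coding and voting of Steps 4-6 can be sketched as follows (a minimal NumPy illustration; the function name and the synthetic inputs in the test are ours, not the paper's):

```python
import numpy as np
from collections import Counter

def patch_cr_vote(patch_dicts, patch_feats, labels, lam=0.001):
    """Sketch of Steps 4-6: code each test-patch measurement b_i over its
    local dictionary A_i (Eqs. (12)-(13)), classify each patch by the
    regularized residual of Eq. (14), then fuse by majority voting."""
    votes = []
    for A, b in zip(patch_dicts, patch_feats):     # one (A_i, b_i) per patch
        P = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
        rho = P @ b                                # Eq. (13)
        best, best_r = None, np.inf
        for c in np.unique(labels):
            m = labels == c                        # columns of class c
            r = np.linalg.norm(b - A[:, m] @ rho[m]) / np.linalg.norm(rho[m])
            if r < best_r:                         # Eq. (14)
                best, best_r = c, r
        votes.append(best)
    return Counter(votes).most_common(1)[0][0]     # Step 6: voting
```

Each patch votes independently, so a few occluded or badly lit patches are outvoted by the rest, which is the intuition behind the robustness of the patch based scheme.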

4. Experimental Analysis

The proposed GPCRC approach and its improved methods were evaluated on three publicly available databases: Extended Yale B [40–42], CMU PIE [43, 44], and LFW [45, 46]. To show the effectiveness of GPCRC, we compared our methods with four classical methods and their improved versions, namely, SRC [13], CRC [15], SSRC [14], and PCRC [16]. Then we tested our GPCRC using several measurement matrices, namely, Random Gaussian matrices, Toeplitz and cyclic matrices, and Deterministic sparse Toeplitz matrices ($\Delta = 4, 8, 16$), to reduce the dimension of the transformed signal, and compared their performance with the conventional PCA method. In all the following experiments, we ran MATLAB 2015b on a typical Intel(R) Core(TM) i5-3470 CPU 3.20 GHz Windows 7 x64 PC. In our implementation of Gabor filters, the parameters were set as $f = \sqrt{2}$, $k_{\max} = \pi/2$, $\delta = \pi$, $\mu = 0, 1, \ldots, 7$, $\nu = 0, 1, \ldots, 4$ for all the experiments below. All the programs were run 20 times on each database, and the mean and variance of each method were reported. In order to report the best result of each method, the parameter $\lambda$ used in SRC, SSRC, CRC, and PCRC was set to 0.001 [13], 0.001 [14], 0.005 [15], and 0.001 [16, 17], respectively. The patch sizes used in PCRC and in GPCRC and its improved methods are 8 × 8 [16, 17] and 16 × 16, respectively.

4.1. Extended Yale B. To test the robustness of the proposed method against illumination, we used the classic Extended Yale B database [40–42], because its faces were acquired under different illumination conditions. The Extended Yale B database contains 38 human subjects under 9 poses and 64 illumination conditions. For a clear comparison, all frontal-face images marked with P00 were used in our experiment, and the face images were downsampled to 32 × 32. 10 or 20 faces from each individual were selected randomly as training samples, and 30 others from each individual were selected as test samples. Figure 4 shows some P00-marked samples from the Extended Yale B database. The experimental results of each method are shown in Table 1. In Figure 5, we compared the recognition rates of


Table 1: Recognition rate (%) on Extended Yale B database.

| Features | Methods | 10 training samples | 20 training samples |
|---|---|---|---|
| Intensity feature | SRC | 76.15 ± 9.60 | 80.50 ± 4.32 |
| | CRC | 78.60 ± 10.58 | 83.00 ± 6.64 |
| | SSRC | 87.53 ± 7.91 | 90.01 ± 4.38 |
| | PCRC | 90.44 ± 1.09 | 94.25 ± 1.01 |
| LBP feature | SRC | 83.56 ± 11.32 | 89.74 ± 9.52 |
| | CRC | 83.02 ± 10.19 | 90.30 ± 3.27 |
| | SSRC | 90.58 ± 6.39 | 95.89 ± 3.01 |
| | PCRC | 79.67 ± 12.57 | 87.25 ± 5.08 |
| Gabor feature | SRC | 85.25 ± 5.36 | 91.50 ± 3.50 |
| | CRC | 86.18 ± 4.43 | 90.00 ± 2.80 |
| | SSRC | 87.06 ± 9.49 | 92.24 ± 5.06 |
| | PCRC | 94.34 ± 2.06 | 98.83 ± 0.94 |

[Figure 4: Some marked samples of the Extended Yale B database.]

[Figure 5: The performance of each dimension reduction method for GPCRC on the Extended Yale B database: recognition rate (%) versus feature dimension (0–1100) for PCA, Random Gaussian matrices, Toeplitz and cyclic matrices, and Deterministic sparse Toeplitz matrices ($\Delta = 4, 8, 16$). (a) 10 training samples; (b) 20 training samples.]


Table 2: The speed of each dimension reduction method.

Method     Random Gaussian   Toeplitz and cyclic   PCA
Time (s)   111.31            108.36                4172.95

Method     Deterministic sparse Toeplitz
           (Δ = 4)   (Δ = 8)   (Δ = 16)
Time (s)   126.43    122.26    126.78

Figure 6: Some marked samples of the CMU PIE database.

various methods with the feature dimensions 32, 64, 128, 256, 512, and 1024 for each patch in GPCRC; those numbers correspond to downsampling ratios of 1/320, 1/160, 1/80, 1/40, 1/20, and 1/10, respectively. In Table 1 it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments. In Figure 5 we can see the performance of the various dimensionality reduction methods for GPCRC. In Figure 5(a), 10 samples of each individual were used as training samples. When the feature dimension is low (≤256), the best performance is achieved by PCA with 92.80% (at 256 dimensions), which is significantly higher than that of the measurement matrices at 256 dimensions: random Gaussian matrices reach 88.77%, Toeplitz and cyclic matrices 85.17%, and deterministic sparse Toeplitz matrices 86.67% (Δ = 4), 85.69% (Δ = 8), and 86.05% (Δ = 16); but none of these matches the performance at the original dimension (10240). The reason why PCA performs significantly better than the measurement matrices at low dimensions can be analyzed through their operating mechanisms. PCA linearly transforms the original data into a set of linearly independent components, which extracts the principal components of the data; these principal components can represent image signals more accurately [31–33]. The theory of compressed sensing [34–36] states that, when the signal is sparse or compressible in a transform domain, a measurement matrix that is incoherent with the transform matrix can be used to project the transform coefficients onto a low-dimensional vector, and this projection preserves the information needed for signal reconstruction. Compressed sensing can achieve reconstruction with high accuracy or high probability from a small number of projections. At extremely low dimensions, the reconstruction information is insufficient and the original signal cannot be reconstructed accurately, so the recognition rate is not high; when the dimension is increased and the measurements can reconstruct the original signal more accurately, the recognition rate improves. When the feature dimension is higher (≥512), PCA cannot work because of the sample size. At 512 dimensions, the measurement matrices (deterministic sparse Toeplitz matrices at 95.21% (Δ = 8) and 94.91% (Δ = 16)) already reach the performance of the original dimension (10240), using only 1/20 of the original dimension. At 1/10 of the original dimension, the measurement matrices essentially achieve the performance of the original dimension: random Gaussian matrices 95.60%, and deterministic sparse Toeplitz matrices 94.64% (Δ = 4), 94.56% (Δ = 8), and 94.98% (Δ = 16). In Figure 5(b), 20 samples of each individual were used as training samples. When the feature dimension is ≤512, the best performer is PCA with 97.50% (at 512 dimensions). At 1024 dimensions, PCA cannot work, while the measurement matrices essentially achieve the performance of the original dimension: random Gaussian matrices 98.80%, and deterministic sparse Toeplitz matrices 98.68% (Δ = 4), 99.01% (Δ = 8), and 98.77% (Δ = 16). In Table 2 we fixed the dimension of each reduction method at 512 in the case of 20 training samples. In general, the complexity of PCA is O(n³) [47], where n is the number of rows of the covariance matrix. The example we provided consists of 38 individuals, each with 20 samples. Since the number of Gabor features per patch (10240) far outweighs the total number of training samples (760), the number of rows of the covariance matrix is determined by the total number of training samples, and the complexity of PCA is approximately O(760³). In the proposed measurement matrix approach, however, the matrix is generated from a deterministic distribution (e.g., a Gaussian distribution) and structure according to the required dimension and signal length, so the measurement matrix has low complexity [36, 48, 49]. Table 2 lists the actual speed of each dimension reduction method for one patch over all samples (all training and all test samples). Based on the above analysis, PCA has the most outstanding performance, but it is limited by the sample size and is time-consuming. The measurement matrices do not perform well at low dimensions but are not limited by the sample size; once the dimension reaches a certain value, their performance matches that of the original dimension. Thus the dimensionality of the data can be reduced without loss of recognition rate.
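To make the projection step concrete, the following numpy sketch (our illustration, not the authors' code) builds the three families of measurement matrices compared above and reduces a 10240-dimensional patch feature to 512 dimensions. The sparse Toeplitz variant is a hypothetical reading of the Δ-pitch construction; the exact scheme of [36] may differ.

```python
import numpy as np

def random_gaussian_matrix(m, n, rng):
    # Entries i.i.d. N(0, 1/m): a classic compressed-sensing choice [34].
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

def toeplitz_matrix(m, n, rng):
    # A Toeplitz matrix is constant along each diagonal, so only
    # m + n - 1 random entries are needed [35].
    c = rng.normal(size=m + n - 1)
    idx = np.arange(m)[:, None] - np.arange(n)[None, :] + n - 1
    return c[idx] / np.sqrt(m)

def sparse_toeplitz_matrix(m, n, delta, rng):
    # Hypothetical sparse variant: keep every delta-th diagonal of a
    # Toeplitz matrix and zero the rest, with rough variance rescaling.
    c = rng.normal(size=m + n - 1)
    c[np.arange(m + n - 1) % delta != 0] = 0.0
    idx = np.arange(m)[:, None] - np.arange(n)[None, :] + n - 1
    return c[idx] * np.sqrt(delta / m)

rng = np.random.default_rng(0)
n, m = 10240, 512                  # Gabor feature length -> reduced dimension
phi = random_gaussian_matrix(m, n, rng)
x = rng.normal(size=n)             # stand-in for one patch's Gabor feature
y = phi @ x                        # low-dimensional measurement
print(y.shape)                     # (512,)
```

The Toeplitz constructions need only O(m + n) random numbers instead of O(mn), which is one reason they are cheaper to generate than a full Gaussian matrix.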


Figure 7: The performance of each dimension reduction method for GPCRC on the CMU PIE database ((a) 2 training samples; (b) 3 training samples; (c) 4 training samples; (d) 5 training samples). Each panel plots recognition rate (%) against feature dimension (0–1100) for the same six methods as in Figure 5.

4.2. CMU PIE. In order to further test the robustness to illumination, pose, and expression, we utilized a currently popular database, CMU PIE [43, 44], which consists of 41368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To verify the advantage of our method under small sample size conditions, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images were downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC has the best results. In Figure 7 we compared the recognition rates of various dimension reduction methods


Table 3: Recognition rate (%) on the CMU PIE database.

                             Training sample size
Features            Method   2                3                4                 5
Intensity feature   SRC      28.21 ± 5.43     39.36 ± 9.82     44.73 ± 15.29     60.54 ± 6.45
                    CRC      30.34 ± 6.15     40.62 ± 9.20     44.35 ± 7.42      61.03 ± 7.82
                    SSRC     37.52 ± 8.34     54.74 ± 8.50     57.32 ± 9.82      66.73 ± 8.85
                    PCRC     42.32 ± 1.17     56.46 ± 1.88     61.16 ± 2.91      70.47 ± 3.87
LBP feature         SRC      38.42 ± 6.46     53.37 ± 7.10     53.67 ± 5.21      63.31 ± 7.24
                    CRC      37.32 ± 7.75     54.83 ± 6.15     54.27 ± 8.31      64.25 ± 6.90
                    SSRC     43.17 ± 5.71     56.84 ± 9.21     64.53 ± 4.76      68.25 ± 4.35
                    PCRC     41.21 ± 7.10     52.21 ± 11.76    62.54 ± 6.21      70.22 ± 3.07
Gabor feature       SRC      41.43 ± 4.20     53.36 ± 5.40     53.58 ± 7.32      66.79 ± 5.45
                    CRC      40.25 ± 5.15     55.35 ± 3.28     56.68 ± 7.42      66.24 ± 6.18
                    SSRC     44.55 ± 6.00     55.96 ± 6.20     62.50 ± 3.16      70.41 ± 4.79
                    PCRC     48.57 ± 1.29     58.45 ± 1.94     68.82 ± 2.56      73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, while at 1024 dimensions the performance of the measurement matrices essentially reaches that of the original dimension. The recognition rate of each measurement matrix is shown below.

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrix while preserving the recognition rate.

4.3. LFW. In practice, the number of training samples is very limited, and only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5749 different subjects taken in unconstrained environments [45]; LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a, each containing no fewer than ten samples. For each subject, we randomly chose 1 or 2 samples from each individual for training and another 5 samples from each

Figure 8: Some marked samples of the LFW database.

individual for testing. The face images were downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with the training sample size from 1 to 2. In Figure 9 we can also see the advantages of the measurement matrices at small sample sizes. At 1024 dimensions, the performance of the measurement matrices with a single training sample is as follows: random Gaussian matrices 26.18%, Toeplitz and cyclic matrices 24.68%, and deterministic sparse Toeplitz matrices 24.43% (Δ = 4), 25.85% (Δ = 8), and 25.28% (Δ = 16). With two training samples, the performance is as follows: random Gaussian matrices 39.68%, Toeplitz and cyclic matrices 37.72%, and deterministic sparse Toeplitz matrices 37.56% (Δ = 4), 39.91% (Δ = 8), and 38.25% (Δ = 16).

5. Conclusion

In order to alleviate the influence of unreliable environments on small sample size FR, in this paper we applied an improved method using Gabor local features with PCRC, and we proposed to use measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small sample FR. (2) The measurement


Table 4: Recognition rate (%) on the LFW database.

Features            Method   1 training sample   2 training samples
Intensity feature   SRC      10.58 ± 7.20        19.02 ± 8.29
                    CRC       9.08 ± 8.15        20.39 ± 7.20
                    SSRC     17.03 ± 5.87        28.52 ± 7.50
                    PCRC     22.29 ± 3.96        33.29 ± 5.28
LBP feature         SRC      15.52 ± 5.54        25.31 ± 7.42
                    CRC      16.23 ± 6.21        26.22 ± 6.57
                    SSRC     19.16 ± 3.41        32.10 ± 4.26
                    PCRC     21.26 ± 4.73        31.59 ± 6.11
Gabor feature       SRC      18.68 ± 4.23        31.35 ± 6.40
                    CRC      17.31 ± 8.15        30.83 ± 5.20
                    SSRC     21.30 ± 7.21        33.67 ± 4.33
                    PCRC     25.82 ± 3.42        39.60 ± 3.54

Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database ((a) single training sample; (b) 2 training samples). Each panel plots recognition rate (%) against feature dimension (0–1100) for the same six methods as in Figure 5.

matrices proposed to deal with the high dimensionality in the GPCRC method can effectively improve computational efficiency and speed, and they overcome the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), the Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), the Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), the University Natural Science Research Project of Anhui Province (no. KJ2017A259), and the Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.

[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.


[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," IEEE Computer Society Press, Los Alamitos, vol. 13, no. 100, pp. 21–27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1–2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies," IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.


[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.




representation related methods [13, 14] achieved great success in FR; those methods focus on the role of the ℓ1-norm while ignoring the role of CR [15], which uses all classes of training samples to represent the target test sample. In [16], Zhu et al. argued that both SRC and the collaborative representation based classifier (CRC) suffer serious performance degradation when the training sample size is very small, because the test sample cannot be well represented. In order to solve the small sample size problem, they proposed to conduct CRC on patches, named patch based CRC (PCRC).

PCRC and some related works [17, 18] have demonstrated their effectiveness on the small sample size problem of FR; however, some key issues remain to be optimized. On one hand, all the PCRC related works used the original face feature, but the original feature cannot effectively handle variations in illumination, pose, facial expression, and noise [19]. On the other hand, the data redundancy existing in these methods leads to poor classification accuracy and high computational cost. The face feature problem has been noticed by some recent works, in which efficient and effective image representations have been proposed using local and holistic features. The Eigenface [20, 21], Randomface [22], and Fisherface [21] are all classical holistic features [23], but other works argued that holistic features are easily affected by variables such as illumination, pose, facial expression, and noise; therefore, they introduced local features such as LBP [24] and the Gabor filter [25, 26]. The Gabor filter has been successfully and widely used in FR [26, 27]: Gabor features can effectively extract local face features at multiple scales and multiple orientations; however, this may lead to a sharp rise in data volume [28]. The key to solving the large data volume problem is dimensionality reduction. Numerous dimensionality reduction methods have been put forward to find projections that better separate the classes in low-dimensional spaces, among which linear subspace analysis has received more and more attention owing to its good properties, including principal component analysis (PCA) [28], linear discriminant analysis (LDA) [29], and independent component analysis (ICA) [30]. Many works showed that PCA has optimal performance in FR [31–33].
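To illustrate how a multiscale, multiorientation Gabor feature vector is formed, the following is a minimal numpy sketch; the kernel size, wavelengths, and 4 × 4 downsampling are our illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    # Real part of a 2-D Gabor kernel: a Gaussian envelope modulating a
    # cosine carrier oriented at angle theta with wavelength lam.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, scales=5, orientations=8):
    # Filter the image with a bank of scales x orientations kernels
    # (circular convolution via FFT) and concatenate the downsampled
    # magnitude responses into one long feature vector.
    feats = []
    for s in range(scales):
        lam = 4.0 * (2 ** (s / 2))             # carrier wavelength per scale
        for o in range(orientations):
            k = gabor_kernel(15, lam / 2, np.pi * o / orientations, lam)
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
            feats.append(resp[::4, ::4].ravel())   # 4x4 downsampling
    return np.concatenate(feats)

img = np.random.default_rng(1).random((32, 32))
f = gabor_features(img)
print(f.shape)   # (2560,): 5 scales x 8 orientations x 8 x 8 responses
```

This concatenation is exactly why the dimension explodes: even a 32 × 32 patch yields thousands of Gabor responses, which motivates the dimension reduction discussed next.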

In this paper, we first attempted to alleviate the influence of unreliable environments on small sample size FR; therefore, we proposed to use the Gabor feature and applied it to PCRC. Then, to improve the computational efficiency of GPCRC, we proposed to use PCA for dimension reduction, and further to use measurement matrices, including random Gaussian matrices [34], Toeplitz and cyclic matrices [35], and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16) [36], to reduce the dimension of the transformed signal, using the low-dimensional data to accurately represent the face. The experimental results showed that GPCRC and its improved methods are effective.

The remainder of this paper is organized as follows. Section 2 briefly reviews SRC and CRC. Section 3 describes the proposed GPCRC and its improved methods. Section 4 illustrates the experiments and their results, and Section 5 concludes the paper.

2. Related Works

2.1. Sparse Representation Based Classification. SRC was first reported by Wright et al. [13] for robust FR. In SRC, let x_i ∈ R^(m×n) denote the i-th face dataset, where each column of x_i is a sample of the face of the i-th individual. Assuming there are k classes of face samples, let X = [x_1, x_2, …, x_k]. When identifying a target face test sample y, the approximation y ≈ Xα̂ is used for coding, where α̂ = [α̂_1; α̂_2; …; α̂_k] and the coefficient vector α̂_i is the encoding over the i-th individual's samples. If y is from the i-th class, then y ≈ x_i α̂_i is the best approximation, which means that a large number of coefficients in α̂_j (j ≠ i) are close to zero and only α̂_i remains intact. Thus the classification (ID) of the target face test sample y can be decoded from the sparse nonzero coefficients in α̂.

The SRC method [13] is summarized as follows.

Step 1. Given the k-class face training samples X and a test sample y.

Step 2 (dimension adjustment). X and y are projected onto the corresponding low-dimensional feature space using a traditional dimensionality reduction technique (PCA).

Step 3 (ℓ2-norm). Normalize the columns of X and the vector y, respectively.

Step 4. Solve the ℓ1-minimization problem: α̂ = arg min ‖α‖₁ subject to Xα = y, or ‖Xα − y‖₂ < ε.

Step 5. Compute the residuals for identification: r_i(y) = ‖y − x_i α̂_i‖₂, ID(y) = arg min_i [r_i(y)], i = 1, 2, …, k.

2.2. Collaborative Representation Based Classification. Zhang et al. [15] argued that it is the CR, and not the ℓ1-norm sparsity, that makes SRC powerful for face classification. Collaborative representation uses all classes (individuals) of training samples to represent the target test sample. In order to reduce the complexity of coding y over X, a regularized least-squares method is proposed by Zhang et al., which is

ρ̂ = arg min_ρ { ‖y − X·ρ‖₂² + λ‖ρ‖₂² }, (1)

where λ is a regularization parameter. Equation (1) has a dual role: first, it stabilizes the least-squares solution; second, it imposes a "sparsity" that is much weaker than the ℓ1-norm. The CR regularized least-squares problem of (1) can be solved as follows:

ρ̂ = (XᵀX + λ·I)⁻¹ Xᵀy. (2)
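The closed form (2) can be checked by setting the gradient of the objective in (1) to zero:

```latex
\frac{\partial}{\partial\rho}\Bigl(\|y - X\rho\|_2^2 + \lambda\|\rho\|_2^2\Bigr)
  = -2X^{T}(y - X\rho) + 2\lambda\rho = 0
\quad\Longrightarrow\quad
(X^{T}X + \lambda I)\,\hat{\rho} = X^{T}y
\quad\Longrightarrow\quad
\hat{\rho} = (X^{T}X + \lambda I)^{-1}X^{T}y .
```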

Let P = (X^T X + λI)^{-1} X^T. Since P is independent of y, it can be precalculated as a projection matrix. When a target test sample y comes in to be identified, y is simply projected onto P via Py, which makes CR very fast. Classification by ρ̂ is very similar to classification by α̂ in the SRC method. In addition to the class-specific representation residuals ‖y − x_i ρ̂_i‖_2, where ρ̂_i is the coefficient vector

Mathematical Problems in Engineering 3

Figure 1: Patch based collaborative representation. The test sample y is divided into overlapping patches y_1, y_2, ..., y_j, ..., y_s, and each patch is coded by collaborative representation over its local dictionary: ρ̂_j = argmin_{ρ_j} ‖y_j − M_j ρ_j‖_2^2 + λ‖ρ_j‖_2^2.

associated with class i, the ℓ2-norm "sparsity" ‖ρ̂_i‖_2 also contains abundant information for classification. The CRC method [15] is summarized as follows.

Step 1. Given k classes of face training samples X and test sample y.

Step 2 (dimension reduction). Use PCA to project X and y onto a low-dimensional feature space.

Step 3 (normalization). Normalize the columns of X and y to unit ℓ2-norm.

Step 4. Encode y over X: P = (X^T X + λI)^{-1} X^T, ρ̂ = Py.

Step 5. Compute the regularized residuals for identification: r_i(y) = ‖y − x_i ρ̂_i‖_2 / ‖ρ̂_i‖_2, ID(y) = argmin_i [r_i(y)], i = 1, 2, ..., k.

2.3. Patch Based Collaborative Representation. From the formulations of sparse representation and collaborative representation, it can be seen that if the linear system determined by the training dictionary X is underdetermined, the linear representation of the target test sample y over X can be very accurate. In reality, however, the available samples of each subject are limited, so the sparse and collaborative representation methods may fail because the linear representation of the test sample may not be accurate enough. To alleviate this problem, Zhu et al. proposed the PCRC method for FR [16]. As shown in Figure 1, the target face test sample y is divided into a set of overlapping face image patches y_1, y_2, ..., y_s according to the patch size. Each face image patch y_j is then collaboratively represented over the local dictionary M_j, extracted from X at the position corresponding to y_j. Since the linear system determined by the local dictionary M_j is often underdetermined, the patch based representation is more accurate than the representation of the whole face image.
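The patch-splitting step just described can be sketched as follows; this is a minimal illustration (the patch size, step, and all function names are our own assumptions, not taken from [16]):

```python
import numpy as np

def extract_patches(img, patch=8, step=4):
    """Divide an image into overlapping patch vectors (step < patch
    gives the overlap); returns one column per patch position."""
    h, w = img.shape
    cols = []
    for r in range(0, h - patch + 1, step):
        for c in range(0, w - patch + 1, step):
            cols.append(img[r:r + patch, c:c + patch].ravel())
    return np.stack(cols, axis=1)  # shape (patch*patch, s)

def local_dictionaries(train_imgs, patch=8, step=4):
    """For each patch position j, stack the patch at that position
    across all training images -- the local dictionary M_j."""
    per_img = [extract_patches(im, patch, step) for im in train_imgs]
    s = per_img[0].shape[1]
    return [np.stack([p[:, j] for p in per_img], axis=1) for j in range(s)]
```

With a 32 × 32 image, a patch size of 8 and a step of 4 yield s = 49 overlapping patches, and M_j collects the j-th patch of every training image as its columns.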

3. Patch Based Collaborative Representation Using Gabor Feature and Measurement Matrix for Face Recognition

3.1. Gabor Feature. The Gabor feature has been widely used in FR because of its robustness to illumination, expression, and pose compared with holistic features. Yang and Zhang applied a multiscale and multidirectional Gabor dictionary to SRC for FR [19], which further improves the robustness of the algorithm. Inspired by these previous works, in this paper we integrate the Gabor feature into the PCRC framework to improve its robustness.

A Gabor filter with direction μ and scale ν is defined as follows [25, 26]:

Ψ_{μ,ν}(z) = (‖k_{μ,ν}‖^2 / σ^2) · exp(−‖k_{μ,ν}‖^2 ‖z‖^2 / 2σ^2) · [exp(i k_{μ,ν}·z) − exp(−σ^2/2)],
k_{μ,ν} = k_ν · exp(iψ_μ),  ψ_μ = πμ/8,  k_ν = k_max / f^ν,  (3)

where z = (x, y) denotes the pixel coordinates, k_max is the maximum frequency, f is the spacing factor between kernels in the frequency domain, and σ determines the bandwidth of the filter. The convolution of the target image Img and the wavelet kernel Ψ_{μ,ν} is expressed as

G_{μ,ν}(z) = Img(z) ∗ Ψ_{μ,ν}(z) = M_{μ,ν}(z) · exp(iθ_{μ,ν}(z)),  (4)

where M_{μ,ν} (μ = 0, 1, ..., 7; ν = 0, 1, ..., 4) is the Gabor amplitude and θ_{μ,ν}(z) is the Gabor phase; local energy changes in the image are expressed by the amplitude information. Because the Gabor phase changes periodically with spatial position while the amplitude is relatively smooth and stable [19, 25, 26], only the Gabor magnitude was used in this paper, as shown in Figure 2.
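As a rough sketch, the Gabor magnitude feature of (3) and (4) can be computed as follows; the kernel size and the FFT-based "same"-size convolution are implementation choices of ours, while the parameter defaults follow the settings reported in Section 4:

```python
import numpy as np

def gabor_kernel(mu, nu, size=31, sigma=np.pi, kmax=np.pi / 2, f=np.sqrt(2)):
    """Gabor kernel of direction mu (0..7) and scale nu (0..4), eq. (3)."""
    k = (kmax / f**nu) * np.exp(1j * np.pi * mu / 8)     # k_{mu,nu}
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    z2 = x**2 + y**2
    k2 = np.abs(k)**2
    dc = np.exp(-sigma**2 / 2)                            # DC compensation term
    wave = np.exp(1j * (k.real * x + k.imag * y))         # e^{i k . z}
    return (k2 / sigma**2) * np.exp(-k2 * z2 / (2 * sigma**2)) * (wave - dc)

def gabor_magnitude(img, mu, nu):
    """Magnitude of the convolution G = Img * Psi (eq. (4)), via FFT."""
    ker = gabor_kernel(mu, nu)
    H, W = img.shape
    kh, kw = ker.shape
    sh, sw = H + kh - 1, W + kw - 1
    full = np.fft.ifft2(np.fft.fft2(img, (sh, sw)) * np.fft.fft2(ker, (sh, sw)))
    r0, c0 = (kh - 1) // 2, (kw - 1) // 2
    return np.abs(full[r0:r0 + H, c0:c0 + W])             # 'same'-size crop
```

Stacking the 8 × 5 = 40 magnitude maps gives the Gabor feature of an image (or patch).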

3.2. Measurement Matrix. Unfortunately, although the Gabor feature enhances the robustness of face image representation, it gives the training sets a much higher dimension than holistic features do; in other words, the computation cost and computation time increase. To cope with the higher dimension of the training sets, a further dimension reduction is necessary. We first considered PCA [31–33], whose steps are as follows. Assume N sample images x_i ∈ R^m, i = 1, 2, ..., N. First, normalize each sample (subtract the mean and then divide by the standard deviation) to obtain vectors x*_i following the standard normal distribution N(0, 1). Second, compute the eigenvectors of the covariance matrix Q = XX^T, X = [x*_1, x*_2, ..., x*_N]: λE = QE, where E is the eigenvector corresponding to the eigenvalue λ. Third, sort the eigenvectors by decreasing eigenvalue and extract the first p eigenvectors to form a linear transformation matrix W_PCA ∈ R^{m×p}; the dimension is then reduced via y_i = W_PCA^T x_i. However, if m ≫ N, the matrix XX^T ∈ R^{m×m} would be very large. To solve this problem, singular value decomposition is usually used: the eigenvalues


Figure 2: Patch based collaborative representation using Gabor feature (each patch is represented by its Gabor magnitude feature).

λ_i and eigenvectors μ_i of XX^T are obtained by calculating the eigenvalues δ_i and eigenvectors ν_i of X^T X (the first p): μ_i = (1/√δ_i) X ν_i, i = 1, 2, ..., p.
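A minimal sketch of this small-sample eigendecomposition trick (the function name is ours):

```python
import numpy as np

def pca_small_sample(X, p):
    """PCA when feature dimension m >> N samples: eigendecompose the
    N x N Gram matrix X^T X instead of the m x m covariance XX^T.
    X: (m, N) matrix of normalized sample columns; returns W_PCA (m, p)."""
    gram = X.T @ X                                  # (N, N) instead of (m, m)
    vals, vecs = np.linalg.eigh(gram)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:p]              # keep the p largest
    d, v = vals[order], vecs[:, order]
    # mu_i = (1 / sqrt(delta_i)) X nu_i -- orthonormal eigenvectors of XX^T
    return X @ v / np.sqrt(np.maximum(d, 1e-12))
```

The columns of the returned matrix are orthonormal, so it can be used directly as the projection W_PCA.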

In summary, we found that PCA and its related algorithms have two obvious shortcomings [37, 38]. First, the leading eigenvectors encode mostly illumination and expression variation rather than discriminative information. Second, the actual computational burden is heavy, and the method fails for small sample sizes.

Inspired by the random-face feature extraction method of [38], we used random Gaussian matrices Φ ∈ R^{M×N} (M ≪ N) as a measurement matrix to measure face images. The measurement matrix Φ is applied to the redundant dictionary X to obtain A = ΦX ∈ R^{M×S}, where X = [x_1, x_2, ..., x_S] ∈ R^{N×S}. As in Figure 3(a), for any test image x, the measurement y is obtained by y = Φx ∈ R^{M×1}. In essence, using a measurement matrix to reduce the dimension of an image differs from sparse representation theory in one key respect: the dimension M of the measurement y is set by the measurement matrix Φ and is not limited by the number of training samples.

However, some works [35, 39] pointed out that random Gaussian matrices are nondeterministic, which limits their practical application, and proposed Toeplitz and cyclic matrices for signal reconstruction. Toeplitz and cyclic matrices generate all rows by rotating a single row vector; usually the entries take the values ±1, with each independent element following a Bernoulli distribution, so they are easy to implement in hardware. Based on the above analysis, we further used Toeplitz and cyclic matrices and their improved variant (the deterministic sparse Toeplitz matrices [36]) in our method. The relevant measurement matrices used in this paper are described in detail below; the operating mechanism of each measurement matrix is shown in Figure 3.

3.2.1. Random Gaussian Matrices. An m × n random Gaussian matrix has the following form [34, 38]:

⎡ a_11  a_12  ⋯  a_1n ⎤
⎢ a_21  a_22  ⋯  a_2n ⎥
⎢  ⋮     ⋮    ⋱    ⋮  ⎥
⎣ a_m1  a_m2  ⋯  a_mn ⎦  (5)

where each element a_ij is independently drawn from a Gaussian distribution with mean 0 and variance 1/m.

3.2.2. Toeplitz and Cyclic Matrices. The concrete form of M × N Toeplitz and cyclic matrices [35, 39] is presented below:

⎡ a_N        a_{N−1}    ⋯  a_1 ⎤
⎢ a_{N+1}    a_N        ⋯  a_2 ⎥
⎢  ⋮          ⋮         ⋱   ⋮  ⎥
⎣ a_{N+M−1}  a_{N+M−2}  ⋯  a_M ⎦  (6)

⎡ a_N      a_{N−1}  ⋯  a_1 ⎤
⎢ a_1      a_N      ⋯  a_2 ⎥
⎢  ⋮        ⋮       ⋱   ⋮  ⎥
⎣ a_{M−1}  a_{M−2}  ⋯  a_M ⎦  (7)

Equation (6) is a Toeplitz matrix: the entries are constant along each diagonal, a_{i,j} = a_{i+1,j+1}. If the additional condition a_i = a_{N+i} is satisfied, then (6) becomes the cyclic matrix (7), whose elements {a_i}, i = 1, ..., N + M − 1, follow a certain probability distribution P(a).

3.2.3. Deterministic Sparse Toeplitz Matrices. The construction of deterministic sparse Toeplitz matrices [36] is based on the Toeplitz and cyclic matrices; we illustrate it with the example of randomly spaced Toeplitz matrices with interval Δ = 2. The independent elements in the first row and first column of (6) constitute the vector T:

T = [a_N, a_{N−1}, ..., a_1, a_{N+1}, ..., a_{N+M−1}],  (8)

which contains all the independent elements of (6). Then


Figure 3: The operating mechanism of the measurement matrix. (a) The linear measurement process y = Φx, with Φ of size M × N, x of size N × 1, and measurement y of size M × 1. (b) The structure of the measurement matrix: the Gabor feature of an image patch (N dimensions) is measured by an M × N random Gaussian matrix, Toeplitz and cyclic matrix, or deterministic sparse Toeplitz matrix (Δ = 4, 8, 16).

assignment operations are performed on T: the elements a_i with i ∈ Λ, where Λ is a set of (N + M − 1)/Δ indexes randomly chosen from the index sequence 1, ..., N + M − 1, are drawn independent and identically distributed (i.i.d.) from the Gaussian distribution, while the other elements are set to 0. Finally, the deterministic sparse Toeplitz matrix is obtained by filling in a_{i+1,j+1} = a_{i,j}, according to the construction rule of the Toeplitz and cyclic matrices.
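The three constructions can be sketched as follows; the function names and the use of NumPy are our own, and only the distributions and the Toeplitz structure come from the text above:

```python
import numpy as np

def gaussian_mm(m, n, rng):
    """Random Gaussian measurement matrix: a_ij ~ N(0, 1/m), eq. (5)."""
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

def toeplitz_mm(first_col, first_row):
    """Toeplitz matrix from its first column/row: entries depend only
    on i - j, so a_{i+1,j+1} = a_{i,j} as in eq. (6)."""
    m, n = len(first_col), len(first_row)
    A = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            A[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return A

def sparse_toeplitz_mm(m, n, delta, rng):
    """Deterministic sparse Toeplitz: of the n + m - 1 independent
    elements, keep (n + m - 1)/delta at random positions as i.i.d.
    Gaussian values and set the rest to 0."""
    t = np.zeros(n + m - 1)
    keep = rng.choice(t.size, size=t.size // delta, replace=False)
    t[keep] = rng.normal(size=keep.size)
    # t = [a_N, ..., a_1, a_{N+1}, ..., a_{N+M-1}] as in eq. (8):
    # first row is t[:n], first column is [t[0], t[n], ..., t[n+m-2]].
    return toeplitz_mm(np.concatenate(([t[0]], t[n:])), t[:n])
```

Because only the n + m − 1 independent elements need to be stored, generating such a matrix is far cheaper than an eigendecomposition of the training data.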

3.3. The Proposed Face Recognition Approach. Although PCRC can indeed handle the small sample size problem, it is still based on the raw feature of each patch, and its robustness and accuracy can be improved further. Based on the above analysis, the Gabor feature and the measurement matrix are incorporated into PCRC for FR, which not only addresses the small sample size problem but also enhances the robustness and efficiency of the method.

Our proposed method for FR is summarized as follows.

Step 1 (input). Face training samples X and test sample y.

Step 2 (patch division). Divide the k classes of face training samples X into s patches each and divide the test sample y into s patches, where y_i and M_{ij} are patches at corresponding positions:

X = [M_1; ⋯; M_i; ⋯; M_s] = [M_11 ⋯ M_1j ⋯ M_1k; ⋯; M_i1 ⋯ M_ij ⋯ M_ik; ⋯; M_s1 ⋯ M_sj ⋯ M_sk],
y = [y_1; ⋯; y_i; ⋯; y_s]  (9)

(rows of the block matrices are separated by semicolons).


Step 3 (feature extraction and measurement). (1) Extract the Gabor feature from the patches of the training samples and the test sample, respectively, to obtain

G = [G_1; ⋯; G_i; ⋯; G_s] = [G_11 ⋯ G_1j ⋯ G_1k; ⋯; G_i1 ⋯ G_ij ⋯ G_ik; ⋯; G_s1 ⋯ G_sj ⋯ G_sk],
G_y = [G_y1; ⋯; G_yi; ⋯; G_ys].  (10)

(2) Measure and reduce the dimension of each patch feature G_ij (i = 1, 2, ..., s; j = 1, 2, ..., k) using the measurement matrix Φ to obtain the measured signals:

A = [A_1; ⋯; A_i; ⋯; A_s] = [a_11 ⋯ a_1j ⋯ a_1k; ⋯; a_i1 ⋯ a_ij ⋯ a_ik; ⋯; a_s1 ⋯ a_sj ⋯ a_sk]
  = [ΦG_11 ⋯ ΦG_1j ⋯ ΦG_1k; ⋯; ΦG_i1 ⋯ ΦG_ij ⋯ ΦG_ik; ⋯; ΦG_s1 ⋯ ΦG_sj ⋯ ΦG_sk],
B = [b_1; ⋯; b_i; ⋯; b_s] = [ΦG_y1; ⋯; ΦG_yi; ⋯; ΦG_ys].  (11)

Step 4. Collaboratively represent each patch measurement signal b_i of the test sample over the corresponding local dictionary A_i of training sample measurement signals:

ρ̂_i = argmin_{ρ_i} ‖b_i − A_i ρ_i‖_2^2 + λ‖ρ_i‖_2^2,  i = 1, 2, ..., s.  (12)

Equation (12) has the closed-form solution

ρ̂_i = P b_i,  P = (A_i^T A_i + λI)^{-1} A_i^T,  i = 1, 2, ..., s,  (13)

where I is the identity matrix.

Step 5. The recognition result of patch y_i is

r_ij(y_i) = ‖b_i − a_ij ρ̂_ij‖_2 / ‖ρ̂_ij‖_2,  i = 1, 2, ..., s;  j = 1, 2, ..., k,
ID(y_i) = argmin_j [r_ij(y_i)].  (14)

Step 6. Use majority voting over the s patch results to obtain the final recognition result.
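Steps 4–6 can be sketched as follows, under the simplifying assumption that each class contributes a single column a_ij to the local dictionary A_i (function names are ours):

```python
import numpy as np
from collections import Counter

def classify_patches(B, A_list, labels, lam=0.001):
    """Code each test-patch measurement b_i over the local dictionary
    A_i by ridge regression (eq. (13)), classify the patch by the
    regularized residual (eq. (14)), then majority-vote over patches.
    B: (M, s) test patch measurements; A_list[i]: (M, k) local
    dictionary; labels: length-k class labels of the columns."""
    votes = []
    for i, A in enumerate(A_list):
        P = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
        rho = P @ B[:, i]                          # eq. (13)
        res = [np.linalg.norm(B[:, i] - A[:, j] * rho[j]) /
               (abs(rho[j]) + 1e-12)               # eq. (14)
               for j in range(A.shape[1])]
        votes.append(labels[int(np.argmin(res))])
    return Counter(votes).most_common(1)[0][0]     # Step 6: voting
```

Note that the projection P depends only on the training dictionary A_i, so in practice it can be precomputed once per patch position.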

4. Experimental Analysis

The proposed approach, GPCRC, and its improved methods were evaluated on three publicly available databases: Extended Yale B [40–42], CMU PIE [43, 44], and LFW [45, 46]. To show the effectiveness of GPCRC, we compared our methods with four classical methods and their improved variants, namely, SRC [13], CRC [15], SSRC [14], and PCRC [16]. We then tested GPCRC using several measurement matrices, namely, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), to reduce the dimension of the transformed signal, and compared their performance with the conventional PCA method. In all the following experiments, we ran MATLAB 2015b on a typical Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz, Windows 7 x64 PC. In our implementation of the Gabor filters, the parameters were set to f = √2, k_max = π/2, σ = π, μ = 0, 1, ..., 7, and ν = 0, 1, ..., 4 for all the experiments below. All the programs were run 20 times on each database, and the mean and variance of each method were reported. To report the best result of each method, the parameter λ used in SRC, SSRC, CRC, and PCRC was set to 0.001 [13], 0.001 [14], 0.005 [15], and 0.001 [16, 17], respectively. The patch sizes used in PCRC and in GPCRC and its improved methods were 8 × 8 [16, 17] and 16 × 16, respectively.

4.1. Extended Yale B. To test the robustness of the proposed method to illumination, we used the classic Extended Yale B database [40–42], because its faces were acquired under varying illumination conditions. The Extended Yale B database contains 38 human subjects under 9 poses and 64 illumination conditions. For a clear comparison, all frontal-face images marked with P00 were used in our experiment, and the face images were downsampled to 32 × 32. 10 or 20 faces from each individual were randomly selected as training samples, and 30 others from each individual were selected as test samples. Figure 4 shows some P00-marked samples from the Extended Yale B database. The experimental results of each method are shown in Table 1. In Figure 5, we compared the recognition rates of


Table 1: Recognition rate (%) on Extended Yale B database.

Feature             Method   10 training samples   20 training samples
Intensity feature   SRC      76.15 ± 9.60          80.50 ± 4.32
                    CRC      78.60 ± 10.58         83.00 ± 6.64
                    SSRC     87.53 ± 7.91          90.01 ± 4.38
                    PCRC     90.44 ± 1.09          94.25 ± 1.01
LBP feature         SRC      83.56 ± 11.32         89.74 ± 9.52
                    CRC      83.02 ± 10.19         90.30 ± 3.27
                    SSRC     90.58 ± 6.39          95.89 ± 3.01
                    PCRC     79.67 ± 12.57         87.25 ± 5.08
Gabor feature       SRC      85.25 ± 5.36          91.50 ± 3.50
                    CRC      86.18 ± 4.43          90.00 ± 2.80
                    SSRC     87.06 ± 9.49          92.24 ± 5.06
                    PCRC     94.34 ± 2.06          98.83 ± 0.94

Figure 4: Some marked samples of the Extended Yale B database.

Figure 5: The performance of each dimension reduction method for GPCRC on Extended Yale B database: recognition rate (%) versus feature dimension for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16). (a) 10 training samples; (b) 20 training samples.


Table 2: The speed of each dimension reduction method (time for one patch of all samples, in seconds).

Method                                    Time (s)
Random Gaussian                           1.1131
Toeplitz and cyclic                       1.0836
PCA                                       41.7295
Deterministic sparse Toeplitz (Δ = 4)     1.2643
Deterministic sparse Toeplitz (Δ = 8)     1.2226
Deterministic sparse Toeplitz (Δ = 16)    1.2678

Figure 6: Some marked samples of the CMU PIE database.

various methods with feature dimensions 32, 64, 128, 256, 512, and 1024 for each patch in GPCRC; these numbers correspond to downsampling ratios of 1/320, 1/160, 1/80, 1/40, 1/20, and 1/10, respectively. In Table 1, it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments. In Figure 5, we can see the performance of the various dimensionality reduction methods for GPCRC. In Figure 5(a), 10 samples of each individual were used as training samples. When the feature dimension is low (≤256), the best performance is obtained by PCA, 92.80% (at 256 dimensions), which is significantly higher than that of the measurement matrices at 256 dimensions: random Gaussian matrices, 88.77%; Toeplitz and cyclic matrices, 85.17%; deterministic sparse Toeplitz matrices (Δ = 4), 86.67%; (Δ = 8), 85.69%; (Δ = 16), 86.05%. None of these, however, reaches the performance of the original dimension (10240 dimensions). The reason why PCA significantly outperforms the measurement matrices at low dimensions can be analyzed through their operating mechanisms. PCA linearly transforms the original data into a set of linearly independent components and thereby extracts the principal components of the data; these principal components represent the image signal more accurately [31–33]. The theory of compressed sensing [34–36] states that when a signal is sparse or compressible in some transform domain, a measurement matrix that is incoherent with the transform matrix can project the transform coefficients onto a low-dimensional vector while preserving the information needed for signal reconstruction; the reconstruction then succeeds with high accuracy or high probability from a small number of projections. At extremely low dimensions, the measured information is insufficient and the original signal cannot be reconstructed accurately, so the recognition rate is not high; as the dimension increases and the measurements reconstruct the original signal more accurately, the recognition rate improves.

When the feature dimension is higher (≥512), PCA cannot work because of the sample size limit. At 512 dimensions, the measurement matrices (deterministic sparse Toeplitz matrices (Δ = 8), 95.21%; (Δ = 16), 94.91%) already reach the performance of the original dimension (10240 dimensions), using only 1/20 of it. At 1/10 of the original dimension, the performance of the measurement matrices essentially matches that of the original dimension: random Gaussian matrices, 95.60%; deterministic sparse Toeplitz matrices (Δ = 4), 94.64%; (Δ = 8), 94.56%; (Δ = 16), 94.98%. In Figure 5(b), 20 samples of each individual were used as training samples. When the feature dimension is ≤512, the best performer is PCA, at 97.50% (512 dimensions). At 1024 dimensions, PCA cannot work, while the performance of the measurement matrices essentially reaches that of the original dimension: random Gaussian matrices, 98.80%; deterministic sparse Toeplitz matrices (Δ = 4), 98.68%; (Δ = 8), 99.01%; (Δ = 16), 98.77%.

In Table 2, we fixed the dimension of each reduction method at 512 in the case of 20 training samples. In general, the complexity of PCA is O(n³) [47], where n is the number of rows of the covariance matrix. The example we provide consists of 38 individuals, each with 20 samples. Since the number of Gabor features per patch (10240) far exceeds the total number of training samples (760), the number of rows of the covariance matrix is determined by the total number of training samples, and the complexity of PCA is approximately O(760³). In the proposed measurement matrix algorithms, by contrast, the measurement matrix is generated from a deterministic structure and distribution (e.g., the Gaussian distribution) according to the required dimension and signal length, so its construction has low complexity [36, 48, 49]. Table 2 lists the actual speed of each dimension reduction method for one patch of all samples (including all training and all testing samples). Based on the above analysis, the PCA method has the most outstanding performance at low dimensions, but it is limited by the sample size and is time-consuming. The measurement matrices do not perform well at low dimensions but are not limited by the sample size; once the dimension reaches a certain value, their performance equals that of the original dimension. Thus, the dimension of the data can be reduced without loss of recognition rate.


Figure 7: The performance of each dimension reduction method for GPCRC on CMU PIE database: recognition rate (%) versus feature dimension for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16). (a) 2 training samples; (b) 3 training samples; (c) 4 training samples; (d) 5 training samples.

4.2. CMU PIE. To further test the robustness to illumination, pose, and expression, we utilized the currently popular CMU PIE database [43, 44], which consists of 41,368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To verify the advantage of our method under small sample size conditions, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images were downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC obtains the best results. In Figure 7, we compared the recognition rates of the various dimension reduction methods


Table 3: Recognition rate (%) on CMU PIE database.

Feature             Method   2 samples       3 samples       4 samples       5 samples
Intensity feature   SRC      28.21 ± 5.43    39.36 ± 9.82    44.73 ± 15.29   60.54 ± 6.45
                    CRC      30.34 ± 6.15    40.62 ± 9.20    44.35 ± 7.42    61.03 ± 7.82
                    SSRC     37.52 ± 8.34    54.74 ± 8.50    57.32 ± 9.82    66.73 ± 8.85
                    PCRC     42.32 ± 1.17    56.46 ± 1.88    61.16 ± 2.91    70.47 ± 3.87
LBP feature         SRC      38.42 ± 6.46    53.37 ± 7.10    53.67 ± 5.21    63.31 ± 7.24
                    CRC      37.32 ± 7.75    54.83 ± 6.15    54.27 ± 8.31    64.25 ± 6.90
                    SSRC     43.17 ± 5.71    56.84 ± 9.21    64.53 ± 4.76    68.25 ± 4.35
                    PCRC     41.21 ± 7.10    52.21 ± 11.76   62.54 ± 6.21    70.22 ± 3.07
Gabor feature       SRC      41.43 ± 4.20    53.36 ± 5.40    53.58 ± 7.32    66.79 ± 5.45
                    CRC      40.25 ± 5.15    55.35 ± 3.28    56.68 ± 7.42    66.24 ± 6.18
                    SSRC     44.55 ± 6.00    55.96 ± 6.20    62.50 ± 3.16    70.41 ± 4.79
                    PCRC     48.57 ± 1.29    58.45 ± 1.94    68.82 ± 2.56    73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, while at 1024 dimensions the performance of the measurement matrices essentially reaches that of the original dimension. The recognition rate of each measurement matrix at 1024 dimensions is as follows.

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrices under the premise of preserving the recognition rate.

4.3. LFW. In practice, the number of training samples is very limited; often only one or two can be obtained, for example, from individual identity documents. To simulate this situation, we selected the LFW database, which includes frontal images of 5749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a, each containing no fewer than ten samples. For each subject, we randomly chose 1 or 2 samples from each individual for training and another 5 samples from each

Figure 8: Some marked samples of the LFW database.

individual for testing. The face images were downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; GPCRC clearly achieves the highest recognition rate in all experiments with training sample sizes of 1 and 2. In Figure 9, we can also see the advantage of the measurement matrices for small sample sizes. At 1024 dimensions, the performance of the measurement matrices with a single training sample is as follows: random Gaussian matrices, 26.18%; Toeplitz and cyclic matrices, 24.68%; deterministic sparse Toeplitz matrices (Δ = 4), 24.43%; (Δ = 8), 25.85%; (Δ = 16), 25.28%. With two training samples: random Gaussian matrices, 39.68%; Toeplitz and cyclic matrices, 37.72%; deterministic sparse Toeplitz matrices (Δ = 4), 37.56%; (Δ = 8), 39.91%; (Δ = 16), 38.25%.

5. Conclusion

To alleviate the influence of unreliable environments and small sample sizes in FR, in this paper we applied Gabor local features to PCRC and proposed to use measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small sample FR. (2) The measurement


Table 4: Recognition rate (%) on LFW database.

Feature             Method   1 training sample   2 training samples
Intensity feature   SRC      10.58 ± 7.20        19.02 ± 8.29
                    CRC       9.08 ± 8.15        20.39 ± 7.20
                    SSRC     17.03 ± 5.87        28.52 ± 7.50
                    PCRC     22.29 ± 3.96        33.29 ± 5.28
LBP feature         SRC      15.52 ± 5.54        25.31 ± 7.42
                    CRC      16.23 ± 6.21        26.22 ± 6.57
                    SSRC     19.16 ± 3.41        32.10 ± 4.26
                    PCRC     21.26 ± 4.73        31.59 ± 6.11
Gabor feature       SRC      18.68 ± 4.23        31.35 ± 6.40
                    CRC      17.31 ± 8.15        30.83 ± 5.20
                    SSRC     21.30 ± 7.21        33.67 ± 4.33
                    PCRC     25.82 ± 3.42        39.60 ± 3.54

Figure 9: The performance of each dimension reduction method for GPCRC on LFW database: recognition rate (%) versus feature dimension for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16). (a) Single training sample; (b) 2 training samples.

matrices proposed to deal with the high dimensionality of the GPCRC method can effectively improve computational efficiency and speed, and they overcome the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), Humanities and Social Sciences Planning Project of Ministry of Education (no. 16YJAZH071), Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), University Natural Science Research Project of Anhui Province (no. KJ2017A259), and Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J Galbally S Marcel and J Fierrez ldquoBiometric antispoofingmethods a survey in face recognitionrdquo IEEE Access vol 2 pp1530ndash1552 2014

[2] S Nagpal M Singh R Singh and M Vatsa ldquoRegularizeddeep learning for face recognition with weight variationsrdquo IEEEAccess vol 3 pp 3010ndash3018 2015

12 Mathematical Problems in Engineering

[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," Los Alamitos: IEEE Computer Society Press, vol. 13, no. 100, pp. 21–27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1-2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies," IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.


[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.



Figure 1: Patch based collaborative representation. The test sample y is divided into overlapping patches y_1, y_2, …, y_j, …, y_s, and each patch y_j is collaboratively represented over its local dictionary M_j: ρ̂_j = argmin_{ρ_j} ‖y_j − M_j ρ_j‖₂² + λ‖ρ_j‖₂².

associated with class i, the l2-norm "sparsity" ‖ρ̂_i‖₂ also contains abundant information for classification. The CRC method [15] is summarized as follows.

Step 1. Given k classes of face training samples X and a test sample y.

Step 2 (reduce the dimension). Use PCA to project X and y into a low-dimensional feature space, obtaining the reduced X and y.

Step 3 (normalization). Normalize the columns of X and y to unit l2-norm.

Step 4. Code y over X: ρ̂ = Py, where P = (XᵀX + λ·I)⁻¹Xᵀ.

Step 5. Compute the regularized residuals for identification: r_i(y) = ‖y − X_i ρ̂_i‖₂ / ‖ρ̂_i‖₂ and ID(y) = argmin_i [r_i(y)], i = 1, 2, …, k.

2.3. Patch Based Collaborative Representation. From the formulations of sparse representation and collaborative representation, it can be seen that if the linear system determined by the training dictionary X is underdetermined, the linear representation of the target test sample y over X can be very accurate. In reality, however, the available samples of each subject are limited, so the sparse representation and collaborative representation methods may fail because the linear representation of the test sample may not be accurate enough. To alleviate this problem, Zhu et al. proposed the PCRC method for FR [16]. As shown in Figure 1, the test face image y is divided into a set of overlapping image patches y_1, y_2, …, y_s according to the patch size. Each patch y_j is collaboratively represented over the local dictionary M_j formed by the patches extracted from X at the position corresponding to y_j. Since the linear system determined by the local dictionary M_j is often underdetermined, the patch based representation is more accurate than the representation of the whole face image.
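As an illustration, the CRC steps above can be sketched in a few lines of NumPy; the toy data, sizes, seed, and λ here are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

# Sketch of CRC on synthetic toy data: X holds 3 classes of 5 samples each,
# and the test sample y is constructed close to a class-0 training sample.
rng = np.random.default_rng(0)
d, n_per_class, k = 20, 5, 3                 # feature dim, samples/class, classes
X = rng.standard_normal((d, n_per_class * k))
labels = np.repeat(np.arange(k), n_per_class)
y = X[:, 0] + 0.01 * rng.standard_normal(d)  # near class 0

# Step 3: normalize the columns of X and y to unit l2-norm.
X = X / np.linalg.norm(X, axis=0)
y = y / np.linalg.norm(y)

# Step 4: code y over X, rho = P y with P = (X^T X + lam I)^(-1) X^T.
lam = 0.005
P = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1])) @ X.T
rho = P @ y

# Step 5: regularized residual r_i(y) = ||y - X_i rho_i|| / ||rho_i|| per class.
residuals = [
    np.linalg.norm(y - X[:, labels == i] @ rho[labels == i])
    / np.linalg.norm(rho[labels == i])
    for i in range(k)
]
pred = int(np.argmin(residuals))             # identified class
```

PCRC simply applies this same coding independently to each patch's local dictionary instead of the whole image.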

3. Patch Based Collaborative Representation Using Gabor Feature and Measurement Matrix for Face Recognition

3.1. Gabor Feature. The Gabor feature has been widely used in FR because of its robustness to illumination, expression, and pose compared with the holistic feature. Yang and Zhang applied a multiscale and multidirectional Gabor dictionary to SRC for FR [19], which further improves the robustness of the algorithm. Inspired by these works, in this paper we integrate the Gabor feature into the PCRC framework to improve its robustness.

A Gabor filter with direction μ and scale ν is defined as follows [25, 26]:

Ψ_{μ,ν}(z) = (‖k_{μ,ν}‖² / δ²) exp(−‖k_{μ,ν}‖² ‖z‖² / (2δ²)) [exp(i k_{μ,ν}·z) − exp(−δ²/2)],
k_{μ,ν} = k_ν e^{iψ_μ},  ψ_μ = πμ/8,  k_ν = k_max / f^ν,    (3)

where z = (x, y) denotes the coordinates of a pixel, k_max is the maximum frequency, f is the spacing factor between kernels in the frequency domain, and δ determines the bandwidth of the filter. The convolution of a target image Img with the wavelet kernel Ψ_{μ,ν} is

G_{μ,ν} = Img(z) ∗ Ψ_{μ,ν}(z) = M_{μ,ν} · exp(iθ_{μ,ν}(z)),    (4)

where M_{μ,ν} (μ = 0, 1, …, 7; ν = 0, 1, …, 4) is the Gabor amplitude (magnitude), θ_{μ,ν}(z) is the Gabor phase, and the local energy variation in the image is expressed by the amplitude. Because the Gabor phase varies periodically with spatial position while the amplitude is relatively smooth and stable [19, 25, 26], only the Gabor magnitude is used in this paper, as shown in Figure 2.
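A minimal sketch of the kernel in (3) and the magnitude feature in (4), using the paper's parameter values f = √2, k_max = π/2, δ = π; the kernel size and the toy image are assumptions for illustration:

```python
import numpy as np

# Build one Gabor kernel per (3) and keep only the magnitude of the
# convolution per (4); the kernel size (31) and toy image are assumptions.
def gabor_kernel(mu, nu, size=31, k_max=np.pi / 2, f=np.sqrt(2), delta=np.pi):
    k = (k_max / f**nu) * np.exp(1j * np.pi * mu / 8)  # k_{mu,nu} = k_nu e^{i psi_mu}
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]  # pixel coordinates z
    z2 = xx**2 + yy**2
    kn2 = np.abs(k) ** 2
    wave = np.exp(1j * (k.real * xx + k.imag * yy)) - np.exp(-delta**2 / 2)
    return (kn2 / delta**2) * np.exp(-kn2 * z2 / (2 * delta**2)) * wave

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                                # toy image
ker = gabor_kernel(mu=0, nu=0)

# Convolution via FFT; only the Gabor magnitude M_{mu,nu} is kept as the feature.
spec = np.fft.fft2(img) * np.fft.fft2(ker, s=img.shape)
magnitude = np.abs(np.fft.ifft2(spec))
```

Running this for all 8 directions and 5 scales and concatenating the magnitudes gives the 40-channel Gabor feature of a patch.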

Figure 2: Patch based collaborative representation using the Gabor magnitude feature.

3.2. Measurement Matrix. Unfortunately, although the Gabor feature can be used to enhance the robustness of face image representation, it brings much higher dimensions to the training sets than the holistic feature does; in other words, the computation cost and computation time increase. To solve the problem caused by the higher dimension of the training sets, a further dimension reduction is necessary. We first consider PCA [31–33]. The steps of PCA are as follows. Assume N sample images x_i ∈ R^m, i = 1, 2, …, N. First, normalize each sample (subtract the mean and divide by the standard deviation) to obtain vectors x*_i following the normal distribution N(0, 1). Second, compute the eigenvectors of the covariance matrix Q = XXᵀ, where X = [x*_1, x*_2, …, x*_N] and λE = QE; E is the eigenvector corresponding to the eigenvalue λ. Third, sort the eigenvectors by decreasing eigenvalue and extract the first p of them to form a linear transformation matrix W_PCA ∈ R^{m×p}; then y_i = W_PCAᵀ x_i reduces the dimension. But if m ≫ N, the matrix XXᵀ ∈ R^{m×m} becomes very large. To solve this problem, singular value decomposition is usually used: the eigenvalues λ and eigenvectors μ_i of XXᵀ are obtained by calculating the eigenvalues δ_i and eigenvectors ν_i of XᵀX (the first p): μ_i = (1/√δ_i) X ν_i, i = 1, 2, …, p.
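The eigen trick above (recovering the leading eigenvectors of the large matrix XXᵀ from the small matrix XᵀX) can be sketched as follows; the sizes and data are synthetic assumptions:

```python
import numpy as np

# For m >> N, get the top-p eigenvectors of X X^T (m x m) from the
# N x N matrix X^T X: if X^T X v = delta v, then u = X v / sqrt(delta)
# satisfies (X X^T) u = delta u.
rng = np.random.default_rng(1)
m, N, p = 1000, 20, 5
X = rng.standard_normal((m, N))

delta, V = np.linalg.eigh(X.T @ X)      # eigenpairs of X^T X (ascending order)
order = np.argsort(delta)[::-1][:p]     # keep the p largest eigenvalues
delta, V = delta[order], V[:, order]
U = (X @ V) / np.sqrt(delta)            # mu_i = (1/sqrt(delta_i)) X nu_i

W_pca = U                               # m x p transformation matrix
Y = W_pca.T @ X                         # reduced p x N features
```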

In summary, PCA and its related algorithms have two obvious shortcomings [37, 38]. First, the leading eigenvectors encode mostly illumination and expression variations rather than discriminating information. Second, in the actual calculation the computational load is very heavy, and the method fails for small sample sizes.

Inspired by the method of using random faces for feature extraction illustrated in [38], we used a Random Gaussian matrix Φ ∈ R^{M×N} (M ≪ N) as a measurement matrix to measure face images. The measurement matrix Φ is applied to the redundant dictionary X to obtain A = ΦX ∈ R^{M×S}, where X = [x_1, x_2, …, x_S] ∈ R^{N×S}. In Figure 3(a), for any test image x, the measurements y are obtained by y = Φx ∈ R^{M×1}. In essence, utilizing the measurement matrix to reduce the dimension of the image differs from sparse representation theory: the dimension M of the measurements y is determined by the measurement matrix Φ and is not limited by the number of training samples.

However, some works [35, 39] suggested that Random Gaussian matrices are uncertain (nondeterministic), which limits their practical application, and Toeplitz and cyclic matrices were proposed for signal reconstruction. Toeplitz and cyclic matrices generate the whole matrix by rotating a single row vector. Usually the entries of Toeplitz and cyclic matrices take the values ±1, each element independently following a Bernoulli distribution, so they are easy to implement in hardware in practical applications. Based on the above analysis, we further used Toeplitz and cyclic matrices and their improved version (namely, the Deterministic sparse Toeplitz matrices [36]) in our method. The relevant measurement matrices used in this paper are described in detail below; the operating mechanism of each measurement matrix is shown in Figure 3.

3.2.1. Random Gaussian Matrices. An m × n Random Gaussian matrix has the following form [34, 38]:

[ a_11  a_12  ⋯  a_1n ]
[ a_21  a_22  ⋯  a_2n ]
[  ⋮     ⋮    ⋱   ⋮   ]
[ a_m1  a_m2  ⋯  a_mn ]    (5)

where each element a_ij independently follows a Gaussian distribution with mean 0 and variance 1/m.
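A minimal sketch of this measurement step; the sizes are illustrative (10240 matches the per-patch Gabor feature dimension discussed later), and the test vector is synthetic:

```python
import numpy as np

# Random Gaussian measurement: entries i.i.d. N(0, 1/M), Phi in R^{M x N}
# with M << N, applied to an N-dimensional (hypothetical) patch feature.
rng = np.random.default_rng(2)
M, N = 128, 10240
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # std 1/sqrt(M) -> var 1/M

x = rng.standard_normal(N)   # stand-in for the Gabor feature of one patch
y = Phi @ x                  # M-dimensional measurements
```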

3.2.2. Toeplitz and Cyclic Matrices. The concrete forms of the M × N Toeplitz and cyclic matrices [35, 39] are presented below:

[ a_N        a_N−1      ⋯  a_1 ]
[ a_N+1      a_N        ⋯  a_2 ]
[  ⋮          ⋮         ⋱   ⋮  ]
[ a_N+M−1    a_N+M−2    ⋯  a_M ]    (6)

[ a_N      a_N−1    ⋯  a_1 ]
[ a_1      a_N      ⋯  a_2 ]
[  ⋮        ⋮       ⋱   ⋮  ]
[ a_M−1    a_M−2    ⋯  a_M ]    (7)

Equation (6) is the Toeplitz matrix: along each diagonal the entries are constant, a_{i,j} = a_{i+1,j+1}. If in addition a_i = a_{N+i}, (6) becomes the cyclic matrix (7), and its elements {a_i}, i = 1, …, N+M−1, follow a certain probability distribution P(a).

3.2.3. Deterministic Sparse Toeplitz Matrices. The construction of Deterministic sparse Toeplitz matrices [36] is based on the Toeplitz and cyclic matrices; we illustrate it with a random spacing Toeplitz matrix with interval Δ = 2. The independent elements in the first row and the first column of (6) constitute the vector

T = [a_N, a_N−1, …, a_1, a_N+1, …, a_N+M−1].    (8)

Random sparse spacing is then applied to T, which contains all the independent elements of (6). Then


Figure 3: The operating mechanism of the measurement matrix. (a) The linear measurement process y = Φx, with Φ ∈ R^{M×N}, x the N-dimensional Gabor feature of an image patch, and y ∈ R^{M×1} the measurements. (b) The structure of the M × N measurement matrices: Random Gaussian matrices, Toeplitz and cyclic matrices, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16).

assignment operations are performed on T: the elements a_i, i ∈ Λ, where Λ is a set of (N+M−1)/Δ indexes randomly chosen from the index sequence 1 → N+M−1, follow the independent and identically distributed (i.i.d.) Random Gaussian distribution, while the other elements are set to 0. Finally, the Deterministic sparse Toeplitz matrix is obtained by setting a_{i+1,j+1} = a_{i,j} according to the structural characteristics of the Toeplitz and cyclic matrices.
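The construction just described can be sketched as follows; the small sizes and the Δ value are illustrative assumptions:

```python
import numpy as np

# Deterministic sparse Toeplitz: keep only (N+M-1)//Delta randomly chosen
# entries of the generating vector T (i.i.d. Gaussian), zero the rest,
# then lay T out with constant diagonals a_{i+1,j+1} = a_{i,j} as in (6).
rng = np.random.default_rng(3)

def sparse_toeplitz(M, N, delta):
    L = N + M - 1
    T = np.zeros(L)                                       # [a_N,...,a_1,a_{N+1},...]
    keep = rng.choice(L, size=L // delta, replace=False)  # the index set Lambda
    T[keep] = rng.standard_normal(keep.size)
    first_row = T[:N]                                     # a_N, a_{N-1}, ..., a_1
    first_col = np.concatenate(([T[0]], T[N:]))           # a_N, a_{N+1}, ...
    Phi = np.empty((M, N))
    for i in range(M):
        for j in range(N):
            Phi[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return Phi

Phi = sparse_toeplitz(M=4, N=8, delta=4)
```

Because only L//Δ entries of T are nonzero, the resulting matrix is sparse, and the Toeplitz structure means only one short vector has to be stored.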

3.3. The Proposed Face Recognition Approach. Although PCRC can indeed alleviate the small sample size problem, it is still based on the original intensity feature of the patch, and its robustness and accuracy can be improved. Based on the above analysis, the Gabor feature and the measurement matrix are infused into PCRC for FR, which not only addresses the small sample size problem but also enhances the robustness and efficiency of the method.

Our proposed method for FR is summarized as follows

Step 1 (input). Face training samples X and test sample y.

Step 2 (patch). Divide the k classes of face training samples X into s patches each, and divide the test sample y into s patches, where y_i and M_ij are the patches at corresponding positions:

    [ M_1 ]   [ M_11  ⋯  M_1j  ⋯  M_1k ]
    [  ⋮  ]   [   ⋮        ⋮        ⋮  ]
X = [ M_i ] = [ M_i1  ⋯  M_ij  ⋯  M_ik ] ,
    [  ⋮  ]   [   ⋮        ⋮        ⋮  ]
    [ M_s ]   [ M_s1  ⋯  M_sj  ⋯  M_sk ]

    [ y_1 ]
    [  ⋮  ]
y = [ y_i ]    (9)
    [  ⋮  ]
    [ y_s ]


Step 3 (extract and measure features). (1) Extract the Gabor feature from the patches of the training samples and the test sample, respectively, to obtain

    [ G_1 ]   [ G_11  ⋯  G_1j  ⋯  G_1k ]
    [  ⋮  ]   [   ⋮        ⋮        ⋮  ]
G = [ G_i ] = [ G_i1  ⋯  G_ij  ⋯  G_ik ] ,
    [  ⋮  ]   [   ⋮        ⋮        ⋮  ]
    [ G_s ]   [ G_s1  ⋯  G_sj  ⋯  G_sk ]

      [ G_y1 ]
      [   ⋮  ]
G_y = [ G_yi ]    (10)
      [   ⋮  ]
      [ G_ys ]

(2) Carry out the measurement and reduce the dimension of each patch feature G_ij (i = 1, 2, …, s; j = 1, 2, …, k) using the measurement matrix Φ to obtain the measured signals:

    [ A_1 ]   [ a_11  ⋯  a_1j  ⋯  a_1k ]   [ ΦG_11  ⋯  ΦG_1j  ⋯  ΦG_1k ]
    [  ⋮  ]   [   ⋮        ⋮        ⋮  ]   [   ⋮         ⋮         ⋮   ]
A = [ A_i ] = [ a_i1  ⋯  a_ij  ⋯  a_ik ] = [ ΦG_i1  ⋯  ΦG_ij  ⋯  ΦG_ik ] ,
    [  ⋮  ]   [   ⋮        ⋮        ⋮  ]   [   ⋮         ⋮         ⋮   ]
    [ A_s ]   [ a_s1  ⋯  a_sj  ⋯  a_sk ]   [ ΦG_s1  ⋯  ΦG_sj  ⋯  ΦG_sk ]

    [ b_1 ]   [ ΦG_y1 ]
    [  ⋮  ]   [   ⋮   ]
B = [ b_i ] = [ ΦG_yi ]    (11)
    [  ⋮  ]   [   ⋮   ]
    [ b_s ]   [ ΦG_ys ]

Step 4. Collaboratively represent each patch measurement signal b_i of the test sample over the training sample measurement signal local dictionary A_i:

ρ̂_i = argmin_{ρ_i} ‖b_i − A_i ρ_i‖₂² + λ‖ρ_i‖₂²,  i = 1, 2, …, s.    (12)

Equation (12) has the closed-form solution

ρ̂_i = P b_i,  P = (A_iᵀ A_i + λI)⁻¹ A_iᵀ,  i = 1, 2, …, s,    (13)

where I is the identity matrix.
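A sketch of this per-patch coding step; the measured dictionary, patch, sizes, and λ below are synthetic illustrative assumptions:

```python
import numpy as np

# Ridge-regularized coding of one patch: P is computed once per patch
# position, so coding a test patch is a single matrix-vector product.
rng = np.random.default_rng(4)
m_dim, n_train, lam = 64, 10, 0.001
A_i = rng.standard_normal((m_dim, n_train))   # measured local dictionary of patch i
b_i = rng.standard_normal(m_dim)              # measured test patch i

P = np.linalg.inv(A_i.T @ A_i + lam * np.eye(n_train)) @ A_i.T
rho_i = P @ b_i                               # collaborative code of patch i
```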

Step 5. The recognition result for patch y_i is

r_ij(y_i) = ‖b_i − a_ij ρ̂_ij‖₂ / ‖ρ̂_ij‖₂,  i = 1, 2, …, s,  j = 1, 2, …, k,
ID(r_ij) = argmin_j [r_ij(y_i)],    (14)

where ρ̂_ij denotes the coefficients in ρ̂_i associated with class j.

Step 6. Use voting among the s patches to obtain the final recognition result.
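The last two steps can be sketched with hypothetical residuals r_ij (the values below are made up purely to show the voting logic):

```python
import numpy as np

# s patches x k classes of hypothetical residuals; each patch votes for its
# minimum-residual class, and the majority vote gives the final label.
s, k = 9, 4
r = np.ones((s, k))      # residuals r_ij, smaller = better match
r[:, 2] = 0.2            # class 2 fits most patches best
r[0, 1] = 0.05           # one (e.g., occluded) patch prefers class 1

patch_votes = np.argmin(r, axis=1)                                # per-patch ID
final_label = int(np.bincount(patch_votes, minlength=k).argmax())  # majority vote
```

The voting makes the final decision robust to a few mismatched patches, which is the point of the patch-based scheme.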

4. Experimental Analysis

The proposed approach GPCRC and its improved versions were evaluated on three publicly available databases: Extended Yale B [40–42], CMU PIE [43, 44], and LFW [45, 46]. To show the effectiveness of GPCRC, we compared our methods with four classical methods and their improved versions, namely, SRC [13], CRC [15], SSRC [14], and PCRC [16]. We then tested GPCRC with several measurement matrices, namely, Random Gaussian matrices, Toeplitz and cyclic matrices, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), to reduce the dimension of the transformed signal, and compared their performance with the conventional PCA method. In all the following experiments, we ran MATLAB 2015b on a typical Intel(R) Core(TM) i5-3470 CPU 3.20 GHz Windows 7 x64 PC. In our implementation of the Gabor filters, the parameters were set as f = √2, k_max = π/2, δ = π, μ = 0, 1, …, 7, and ν = 0, 1, …, 4 for all the experiments below. All the programs were run 20 times on each database, and the mean and variance of each method were reported. To report the best result of each method, the parameter λ used in SRC, SSRC, CRC, and PCRC was set to 0.001 [13], 0.001 [14], 0.005 [15], and 0.001 [16, 17], respectively. The patch sizes used in PCRC and in GPCRC and its improved versions were 8 × 8 [16, 17] and 16 × 16, respectively.

4.1. Extended Yale B. To test the robustness of the proposed method to illumination, we used the classic Extended Yale B database [40–42], because its faces were acquired under different illumination conditions. The Extended Yale B database contains 38 human subjects under 9 poses and 64 illumination conditions. For a clear comparison, all frontal-face images marked with P00 were used in our experiment, and the face images were downsampled to 32 × 32. 10 or 20 faces from each individual were selected randomly as training samples, and 30 others from each individual were selected as test samples. Figure 4 shows some P00-marked samples from the Extended Yale B database. The experimental results of each method are shown in Table 1. In Figure 5, we compared the recognition rates of


Table 1: Recognition rate (%) on Extended Yale B database.

Features		Methods	Training sample size
			10		20
Intensity feature	SRC	76.15 ± 9.60	80.50 ± 4.32
			CRC	78.60 ± 10.58	83.00 ± 6.64
			SSRC	87.53 ± 7.91	90.01 ± 4.38
			PCRC	90.44 ± 1.09	94.25 ± 1.01
LBP feature		SRC	83.56 ± 11.32	89.74 ± 9.52
			CRC	83.02 ± 10.19	90.30 ± 3.27
			SSRC	90.58 ± 6.39	95.89 ± 3.01
			PCRC	79.67 ± 12.57	87.25 ± 5.08
Gabor feature		SRC	85.25 ± 5.36	91.50 ± 3.50
			CRC	86.18 ± 4.43	90.00 ± 2.80
			SSRC	87.06 ± 9.49	92.24 ± 5.06
			PCRC	94.34 ± 2.06	98.83 ± 0.94

Figure 4: Some marked samples of the Extended Yale B database.

Figure 5: The performance of each dimension reduction method for GPCRC on Extended Yale B database. (a) 10 training samples; (b) 20 training samples. (Axes: feature dimension, 0–1100, versus recognition rate (%); curves: Random Gaussian matrices, Toeplitz and cyclic matrices, PCA, and Deterministic sparse Toeplitz matrices with Δ = 4, 8, 16.)


Table 2: The speed of each dimension reduction method.

Method	Random Gaussian	Toeplitz and cyclic	PCA
Time (s)	11131	10836	417295

Method	Deterministic sparse Toeplitz (Δ = 4)	(Δ = 8)	(Δ = 16)
Time (s)	12643	12226	12678

Figure 6: Some marked samples of the CMU PIE database.

various methods with the space dimensions 32 64 128256 512 and 1024 for each patch in GPCRC and thosenumbers correspond to downsampling ratios of 1320 1160180 140 120 and 110 respectively In Table 1 it can beclearly seen that GPCRC achieves the highest recognitionrate performance in all experiments In Figure 6 we cansee the performance of various dimensionality reductionmethods for GPCRC 10 samples of each individual inFigure 5(a) were used as training samples When the featuredimension is low (le256) the best performance of PCA is9280 (256 dimension) which is significantly higher thanthat of the measurement matrix at 256 dimension RandomGaussian matrices are 8877 Toeplitz and cyclic matricesare 8517 Deterministic sparse Toeplitz matrices (Δ = 4)are 8667 Deterministic sparse Toeplitz matrices (Δ = 8)are 8569 and Deterministic sparse Toeplitz matrices (Δ =16) are 8605 but none of these has been as good as theperformance of the original dimension (10240 dimension)So the reason why the performance of PCA is significantlybetter than the measurement matrix at low dimension canbe analyzed through their operation mechanism the PCAtransforms the original data into a set of linearly independentrepresentations of each dimension in a linear transformationwhich can extract the principal component of the dataThese principal components can represent image signalsmore accurately [31ndash33] The theory of compressed sensing[34ndash36] states that when the signal is sparse or compressiblein a transform domain the measurement matrix whichis noncoherent with the transform matrix can be used toproject transform coefficients to the low-dimensional vec-tor and this projection can maintain the information forsignal reconstruction The compressed sensing technologycan achieve the reconstruction with high accuracy or highprobability using small number of projection data In the caseof extremely low dimension the reconstruction informationis insufficient and the original signal cannot be 
reconstructedaccurately So the recognition rate is not high When thedimension is increased and the reconstructed informationcan more accurately reconstruct the original signal the

recognition rate will improve. When the feature dimension is higher (≥512), PCA cannot work because of the sample size. At 512 dimensions, the measurement matrix (Deterministic sparse Toeplitz matrices (Δ = 8) are 95.21%; Deterministic sparse Toeplitz matrices (Δ = 16) are 94.91%) has reached the performance of the original dimension (10240 dimensions), although the dimension is only 1/20 of the original. At 1/10 of the original dimension, the performance of the measurement matrix has basically achieved the performance of the original dimension: Random Gaussian matrices are 95.60%, Deterministic sparse Toeplitz matrices (Δ = 4) are 94.64%, Deterministic sparse Toeplitz matrices (Δ = 8) are 94.56%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 94.98%. In Figure 5(b), 20 samples of each individual were used as training samples. When the feature dimension is ≤512, the best performance is PCA, which reaches 97.50% (512 dimensions). At 1024 dimensions, PCA cannot work, and the performance of the measurement matrix has basically achieved the performance of the original dimension: Random Gaussian matrices are 98.80%, Deterministic sparse Toeplitz matrices (Δ = 4) are 98.68%, Deterministic sparse Toeplitz matrices (Δ = 8) are 99.01%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 98.77%.

In Table 2, we fixed the dimension of each reduction method at 512 in the case of 20 training samples. In general, the complexity of PCA is $O(n^3)$ [47], where $n$ is the number of rows of the covariance matrix. The example we provided consists of 38 individuals, each contributing 20 samples. Since the number of Gabor features (10240) for each patch far outweighs the total number of training samples (760), the number of rows of the covariance matrix is determined by the total number of training samples, and the complexity of PCA is approximately $O(760^3)$. In the proposed measurement matrix algorithm, however, the measurement matrix is formed by a certain deterministic distribution (e.g., Gaussian) and structure according to the required dimension and signal length, so the measurement matrix has low complexity [36, 48, 49]. The actual speed of each dimension reduction method for one patch of all samples (including all training and all testing samples) is listed in Table 2. Based on the above analysis, the PCA method has the most outstanding performance, but it is limited by the sample size and is time-consuming. The measurement matrix does not perform well in low dimensions but is not limited by the sample size; once the dimension reaches a certain value, its performance matches that of the original dimension. Thus, the dimension of the data can be reduced without loss of recognition rate.
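The sample-size limitation discussed above can be checked numerically. The sketch below is a minimal NumPy illustration (the paper's experiments used MATLAB; the sizes here are scaled-down stand-ins for the 760 × 10240 Gabor data): the covariance of $n$ samples has rank at most $n-1$, so PCA cannot supply more components than samples, whereas a Gaussian measurement matrix can target any dimension $M$.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 20, 512   # stand-ins for the paper's 760 x 10240
X = rng.standard_normal((n_samples, n_features))

# PCA: the covariance of n centered samples has rank at most n - 1, so at
# most 19 meaningful components exist here -- PCA "cannot work" beyond that.
Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(Xc @ Xc.T)   # small n x n problem
rank = int(np.sum(eigvals > 1e-6))

# Measurement matrix: the target dimension M is independent of sample size.
M = 128
Phi = rng.standard_normal((M, n_features)) / np.sqrt(M)  # N(0, 1/M) entries
Y = X @ Phi.T   # every sample reduced from 512 to 128 dimensions
print(rank, Y.shape)  # 19 (20, 128)
```

The same projection applies to any number of samples, which is why the measurement matrix is not limited by the training set size.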

Mathematical Problems in Engineering 9

[Figure 7: The performance of each dimension reduction method for GPCRC on the CMU PIE database. Panels: (a) 2 training samples; (b) 3 training samples; (c) 4 training samples; (d) 5 training samples. Each panel plots recognition rate (%) against feature dimension (0–1100) for Random Gaussian matrices, Toeplitz and cyclic matrices, PCA, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16).]

4.2. CMU PIE. In order to further test robustness to illumination, pose, and expression, we utilized a currently popular database, CMU PIE [43, 44], consisting of 41,368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To verify the advantage of our method in the small sample size condition, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images were downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC has the best results. In Figure 7, we compared the recognition rates of various dimension reduction methods


Table 3: Recognition rate (%) on CMU PIE database (columns: training sample size 2, 3, 4, and 5).

Intensity feature
  SRC   28.21 ± 5.43   39.36 ± 9.82   44.73 ± 15.29   60.54 ± 6.45
  CRC   30.34 ± 6.15   40.62 ± 9.20   44.35 ± 7.42    61.03 ± 7.82
  SSRC  37.52 ± 8.34   54.74 ± 8.50   57.32 ± 9.82    66.73 ± 8.85
  PCRC  42.32 ± 1.17   56.46 ± 1.88   61.16 ± 2.91    70.47 ± 3.87

LBP feature
  SRC   38.42 ± 6.46   53.37 ± 7.10   53.67 ± 5.21    63.31 ± 7.24
  CRC   37.32 ± 7.75   54.83 ± 6.15   54.27 ± 8.31    64.25 ± 6.90
  SSRC  43.17 ± 5.71   56.84 ± 9.21   64.53 ± 4.76    68.25 ± 4.35
  PCRC  41.21 ± 7.10   52.21 ± 11.76  62.54 ± 6.21    70.22 ± 3.07

Gabor feature
  SRC   41.43 ± 4.20   53.36 ± 5.40   53.58 ± 7.32    66.79 ± 5.45
  CRC   40.25 ± 5.15   55.35 ± 3.28   56.68 ± 7.42    66.24 ± 6.18
  SSRC  44.55 ± 6.00   55.96 ± 6.20   62.50 ± 3.16    70.41 ± 4.79
  PCRC  48.57 ± 1.29   58.45 ± 1.94   68.82 ± 2.56    73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, while at 1024 dimensions the performance of the measurement matrix has basically achieved the performance of the original dimension. The recognition rate of each measurement matrix is as follows:

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrix under the premise of preserving the recognition rate.

4.3. LFW. In practice, the number of training samples is very limited, and only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5,749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a; each subject contains no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples from each individual for training and another 5 samples from each

Figure 8: Some marked samples of the LFW database.

individual for testing. The face images were downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with the training sample size from 1 to 2. In Figure 9, we can also see the advantages of the measurement matrix for small sample sizes. At 1024 dimensions, the performance of the measurement matrix with a single training sample is as follows: Random Gaussian matrices are 26.18%, Toeplitz and cyclic matrices are 24.68%, Deterministic sparse Toeplitz matrices (Δ = 4) are 24.43%, Deterministic sparse Toeplitz matrices (Δ = 8) are 25.85%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 25.28%. The performance with two training samples is as follows: Random Gaussian matrices are 39.68%, Toeplitz and cyclic matrices are 37.72%, Deterministic sparse Toeplitz matrices (Δ = 4) are 37.56%, Deterministic sparse Toeplitz matrices (Δ = 8) are 39.91%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 38.25%.

5. Conclusion

In order to alleviate the influence of unreliable environments on small sample size face recognition, in this paper we applied an improved method with Gabor local features to PCRC, and we proposed to use measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small sample FR. (2) The measurement


Table 4: Recognition rate (%) on LFW database (columns: training sample size 1 and 2).

Intensity feature
  SRC   10.58 ± 7.20   19.02 ± 8.29
  CRC    9.08 ± 8.15   20.39 ± 7.20
  SSRC  17.03 ± 5.87   28.52 ± 7.50
  PCRC  22.29 ± 3.96   33.29 ± 5.28

LBP feature
  SRC   15.52 ± 5.54   25.31 ± 7.42
  CRC   16.23 ± 6.21   26.22 ± 6.57
  SSRC  19.16 ± 3.41   32.10 ± 4.26
  PCRC  21.26 ± 4.73   31.59 ± 6.11

Gabor feature
  SRC   18.68 ± 4.23   31.35 ± 6.40
  CRC   17.31 ± 8.15   30.83 ± 5.20
  SSRC  21.30 ± 7.21   33.67 ± 4.33
  PCRC  25.82 ± 3.42   39.60 ± 3.54

[Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database. Panels: (a) single training sample; (b) 2 training samples. Each panel plots recognition rate (%) against feature dimension (0–1100) for Random Gaussian matrices, Toeplitz and cyclic matrices, PCA, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16).]

matrix proposed to deal with the high dimensionality in the GPCRC method can effectively improve computational efficiency and speed, and it overcomes the limitations of the PCA method.

Conflicts of Interest

None of the authors have any conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), University Natural Science Research Project of Anhui Province (no. KJ2017A259), and Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.
[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.
[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.
[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.
[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.
[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.
[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," Los Alamitos: IEEE Computer Society Press, vol. 13, no. 100, pp. 21–27, 1991.
[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.
[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.
[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.
[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1–2, IEEE, Amsterdam, The Netherlands, September 2008.
[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.
[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.
[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.
[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," in International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.
[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.
[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.
[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.
[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.
[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.
[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.
[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.
[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.
[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.
[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.
[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.
[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.
[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.
[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.
[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.
[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.
[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.
[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.
[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.
[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.
[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.
[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.
[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.
[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.
[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.
[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.
[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.
[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.



[Figure 2: Patch based collaborative representation using Gabor feature (patches → Gabor feature → magnitude feature).]

$\lambda_i$ and eigenvectors $\mu_i$ of $XX^T$ are obtained by calculating the eigenvalues $\delta_i$ and eigenvectors $v_i$ of $X^T X$ (the first $p$): $\mu_i = (1/\sqrt{\delta_i})\, X v_i$, $i = 1, 2, \ldots, p$.
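This transposition trick can be checked numerically. The sketch below is a minimal NumPy illustration (the paper itself works in MATLAB; the sizes are arbitrary), building eigenvectors of the large $XX^T$ from those of the small $X^TX$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, S = 200, 10            # feature dimension >> number of samples
X = rng.standard_normal((N, S))

# Eigen-decompose the small S x S matrix X^T X ...
delta, v = np.linalg.eigh(X.T @ X)
keep = delta > 1e-10
delta, v = delta[keep], v[:, keep]

# ... and map its eigenvectors to eigenvectors of the large N x N matrix
# XX^T via mu_i = (1 / sqrt(delta_i)) * X v_i
mu = X @ v / np.sqrt(delta)

# Verify the eigenpair relation (XX^T) mu_i = delta_i * mu_i
residual = X @ (X.T @ mu) - mu * delta
print(np.abs(residual).max() < 1e-7)  # True
```

This is what makes PCA tractable when the feature dimension (here 10240 Gabor values per patch) far exceeds the sample count.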

In summary, we found that PCA and its related algorithms have two obvious shortcomings [37, 38]. Firstly, the leading eigenvectors encode mostly illumination and expression rather than discriminating information. Secondly, in the actual calculation the amount of computation is very large, and the method fails for small samples.

Inspired by the method of using random faces for feature extraction illustrated in the literature [38], we used Random Gaussian matrices $\Phi \in \mathbb{R}^{M \times N}$ ($M \ll N$) as a measurement matrix to measure face images. The measurement matrix $\Phi$ is applied to the redundant dictionary $X$ to obtain $A = \Phi X \in \mathbb{R}^{M \times S}$, where $X = [x_1, x_2, \ldots, x_S] \in \mathbb{R}^{N \times S}$. In Figure 3(a), for any test image $x$, the measurements $y$ are obtained by $y = \Phi x \in \mathbb{R}^{M \times 1}$. In essence, utilizing the measurement matrix to reduce the dimension of the image is different from sparse representation theory: the dimension $M$ of the measurements $y$ is set by the measurement matrix $\Phi$ and is not limited by the number of training samples.

However, some works [35, 39] suggested that Random Gaussian matrices are uncertain, which limits their practical application, and Toeplitz and cyclic matrices were proposed for signal reconstruction. Toeplitz and cyclic matrices rotate a row vector to generate all rows. Usually the entries of Toeplitz and cyclic matrices take values ±1, each element independently following a Bernoulli distribution, so they are easy to implement in hardware. Based on the above analysis, we further used Toeplitz and cyclic matrices and their improved version (namely, the Deterministic sparse Toeplitz matrices [36]) in our method. The relevant measurement matrices used in this paper are described in detail below. The operating mechanism of each measurement matrix is shown in Figure 3.

3.2.1. Random Gaussian Matrices. The form of the $m \times n$ Random Gaussian matrices is expressed as follows [34, 38]:

$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \quad (5)$$

Each element $a_{ij}$ is independently subject to a Gaussian distribution with mean 0 and variance $1/m$.

3.2.2. Toeplitz and Cyclic Matrices. The concrete form of the $M \times N$ Toeplitz and cyclic matrices [35, 39] is presented below:

$$\begin{bmatrix} a_N & a_{N-1} & \cdots & a_1 \\ a_{N+1} & a_N & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ a_{N+M-1} & a_{N+M-2} & \cdots & a_M \end{bmatrix} \quad (6)$$

$$\begin{bmatrix} a_N & a_{N-1} & \cdots & a_1 \\ a_1 & a_N & \cdots & a_2 \\ \vdots & \vdots & \ddots & \vdots \\ a_{M-1} & a_{M-2} & \cdots & a_M \end{bmatrix} \quad (7)$$

Equation (6) is the Toeplitz matrix: along each diagonal, $a_{i,j} = a_{i+1,j+1}$ is constant. If the additional condition $a_i = a_{N+i}$ on (6) is satisfied, it becomes the cyclic matrix (7), and its elements $\{a_i\}_{i=1}^{N+M-1}$ follow a certain probability distribution $P(a)$.

3.2.3. Deterministic Sparse Toeplitz Matrices. The construction of the Deterministic sparse Toeplitz matrices [36] is based on the Toeplitz and cyclic matrices; it is illustrated in this paper by the example of randomly spaced Toeplitz matrices with an interval of $\Delta = 2$. The independent elements in the first row and the first column of (6) constitute the vector $T$:

$$T = [a_N \;\, a_{N-1} \;\, \cdots \;\, a_1 \;\, a_{N+1} \;\, \cdots \;\, a_{N+M-1}] \quad (8)$$

Conducting random sparse spacing, one can see that $T$ in (8) contains all the independent elements of (6). Then


[Figure 3: The operating mechanism of the measurement matrix. (a) The linear measurement process $y = \Phi x$, with $\Phi$ of size $M \times N$ and measurements $y$ of size $M \times 1$. (b) The structure of the measurement matrix: the Gabor feature of an image patch ($N$ dimensions) is measured by an $M \times N$ Random Gaussian matrix, Toeplitz and cyclic matrix, or Deterministic sparse Toeplitz matrix ($\Delta = 4, 8, 16$).]

assignment operations are performed on $T$: the elements $a_i$ ($i \in \Lambda$, where $\Lambda$ is a set of $(N + M - 1)/\Delta$ indexes randomly chosen from the index sequence $1 \rightarrow N + M - 1$) obey the independent and identically distributed (i.i.d.) Random Gaussian distribution, while the other elements are 0. Finally, we obtain the Deterministic sparse Toeplitz matrix with $a_{i+1,j+1} = a_{i,j}$, according to the construction of the Toeplitz and cyclic matrices.
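The construction described in Sections 3.2.2 and 3.2.3 can be sketched as follows (NumPy rather than the paper's MATLAB; the index convention is chosen so that entry $(i, j)$ depends only on $i - j$, which enforces $a_{i+1,j+1} = a_{i,j}$):

```python
import numpy as np

def sparse_toeplitz(M, N, delta, rng=None):
    """M x N deterministic sparse Toeplitz matrix (cf. [36]): of the
    N + M - 1 independent entries in the vector T of Eq. (8),
    (N + M - 1) // delta randomly chosen ones are i.i.d. Gaussian
    and the rest are zero."""
    rng = rng or np.random.default_rng()
    L = N + M - 1                       # length of the vector T
    t = np.zeros(L)
    idx = rng.choice(L, size=L // delta, replace=False)
    t[idx] = rng.standard_normal(idx.size)
    # Entry (i, j) depends only on i - j: the Toeplitz property.
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    return t[N - 1 + i - j]

A = sparse_toeplitz(64, 256, delta=8, rng=np.random.default_rng(0))
# Constant along diagonals: a[i+1, j+1] == a[i, j]
print(np.array_equal(A[1:, 1:], A[:-1, :-1]))  # True
```

Because only about $1/\Delta$ of the independent entries are nonzero, multiplying by this matrix is cheaper than by a dense Gaussian matrix of the same size.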

3.3. The Proposed Face Recognition Approach. Although PCRC can indeed address the small sample size problem, the method is still based on the original features of each patch, and its robustness and accuracy can be improved. Based on the above analysis, the Gabor feature and the measurement matrix are infused into PCRC for FR, which not only solves the small sample size problem but also enhances the robustness and efficiency of the method.

Our proposed method for FR is summarized as follows.

Step 1 (input). Face training samples X and test sample y.

Step 2 (patch). Divide each of the $k$ face training samples in X into $s$ patches and divide the test sample y into $s$ patches, where $y_i$ and $M_{ij}$ are the sample patches at corresponding positions:

$$X = \begin{bmatrix} M_1 \\ \vdots \\ M_i \\ \vdots \\ M_s \end{bmatrix} = \begin{bmatrix} M_{11} & \cdots & M_{1j} & \cdots & M_{1k} \\ \vdots & & \vdots & & \vdots \\ M_{i1} & \cdots & M_{ij} & \cdots & M_{ik} \\ \vdots & & \vdots & & \vdots \\ M_{s1} & \cdots & M_{sj} & \cdots & M_{sk} \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ \vdots \\ y_i \\ \vdots \\ y_s \end{bmatrix} \quad (9)$$
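Step 2 can be sketched as follows (a NumPy illustration; 16 × 16 is the patch size the paper uses for GPCRC on 32 × 32 images):

```python
import numpy as np

def split_into_patches(img, patch=16):
    """Divide a face image into non-overlapping patch x patch blocks
    (Step 2); a 32 x 32 image yields s = 4 patches."""
    h, w = img.shape
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h, patch)
            for c in range(0, w, patch)]

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
patches = split_into_patches(img)
print(len(patches), patches[0].shape)  # 4 (16, 16)
```

Applying the same split to every training sample gives the position-aligned patches $M_{ij}$ of Eq. (9).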


Step 3 (extract and measure features). (1) Extract the Gabor feature from the patches of the training samples and the test sample, respectively, to obtain

$$G = \begin{bmatrix} G_1 \\ \vdots \\ G_i \\ \vdots \\ G_s \end{bmatrix} = \begin{bmatrix} G_{11} & \cdots & G_{1j} & \cdots & G_{1k} \\ \vdots & & \vdots & & \vdots \\ G_{i1} & \cdots & G_{ij} & \cdots & G_{ik} \\ \vdots & & \vdots & & \vdots \\ G_{s1} & \cdots & G_{sj} & \cdots & G_{sk} \end{bmatrix}, \qquad G_y = \begin{bmatrix} G_{y_1} \\ \vdots \\ G_{y_i} \\ \vdots \\ G_{y_s} \end{bmatrix} \quad (10)$$

(2) Measure and reduce the dimension of each patch feature $G_{ij}$ ($i = 1, 2, \ldots, s$; $j = 1, 2, \ldots, k$) using the measurement matrix $\Phi$ to obtain the measured signals

$$A = \begin{bmatrix} A_1 \\ \vdots \\ A_i \\ \vdots \\ A_s \end{bmatrix} = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1k} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{ik} \\ \vdots & & \vdots & & \vdots \\ a_{s1} & \cdots & a_{sj} & \cdots & a_{sk} \end{bmatrix} = \begin{bmatrix} \Phi G_{11} & \cdots & \Phi G_{1j} & \cdots & \Phi G_{1k} \\ \vdots & & \vdots & & \vdots \\ \Phi G_{i1} & \cdots & \Phi G_{ij} & \cdots & \Phi G_{ik} \\ \vdots & & \vdots & & \vdots \\ \Phi G_{s1} & \cdots & \Phi G_{sj} & \cdots & \Phi G_{sk} \end{bmatrix},$$

$$B = \begin{bmatrix} b_1 \\ \vdots \\ b_i \\ \vdots \\ b_s \end{bmatrix} = \begin{bmatrix} \Phi G_{y_1} \\ \vdots \\ \Phi G_{y_i} \\ \vdots \\ \Phi G_{y_s} \end{bmatrix} \quad (11)$$

Step 4. Collaboratively represent each patch measurement signal $b_i$ of the test sample over the training sample measurement signal local dictionary $A_i$:

$$\hat{\rho}_i = \arg\min_{\rho_i} \left\| b_i - A_i \rho_i \right\|_2^2 + \lambda \left\| \rho_i \right\|_2^2, \quad i = 1, 2, \ldots, s. \quad (12)$$

Equation (12) can be easily derived as follows:

$$\hat{\rho}_i = P b_i, \quad P = \left( A_i^T A_i + \lambda I \right)^{-1} A_i^T, \quad i = 1, 2, \ldots, s, \quad (13)$$

where $I$ is the unit matrix.
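A numerical sketch of Eqs. (12)-(13) (NumPy; the dictionary sizes are illustrative). Since $P$ depends only on $A_i$ and $\lambda$, it can be precomputed once per patch position and reused for every test sample:

```python
import numpy as np

def cr_projection(A_i, lam=0.001):
    """P = (A_i^T A_i + lam I)^{-1} A_i^T of Eq. (13); precompute once."""
    k = A_i.shape[1]
    return np.linalg.solve(A_i.T @ A_i + lam * np.eye(k), A_i.T)

rng = np.random.default_rng(0)
A_i = rng.standard_normal((128, 20))   # measured local dictionary
b_i = rng.standard_normal(128)         # measured test patch
rho = cr_projection(A_i) @ b_i         # \hat{rho}_i of Eq. (13)

# Sanity check against the variational form of Eq. (12): the gradient of
# ||b - A rho||^2 + lam ||rho||^2 must vanish at the minimizer.
grad = -2 * A_i.T @ (b_i - A_i @ rho) + 2 * 0.001 * rho
print(np.abs(grad).max() < 1e-8)  # True
```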

Step 5. The recognition result for patch $y_i$ is

$$r_{ij}\left( y_i \right) = \frac{\left\| b_i - a_{ij} \hat{\rho}_{ij} \right\|_2}{\left\| \hat{\rho}_{ij} \right\|_2}, \quad i = 1, 2, \ldots, s, \; j = 1, 2, \ldots, k,$$
$$\mathrm{ID}\left( r_{ij} \right) = \arg\min_j \left[ r_{ij}\left( y_i \right) \right], \quad (14)$$

where $\hat{\rho}_{ij}$ denotes the coefficients in $\hat{\rho}_i$ associated with $a_{ij}$.

Step 6. Use voting over the $s$ patch-level results to obtain the final recognition result.
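Steps 4-6 can be sketched end to end as follows (a NumPy illustration; the class-wise residual follows Eq. (14), and the synthetic data at the bottom exists only to exercise the code):

```python
import numpy as np

def classify_patch(A_i, labels, b_i, lam=0.001):
    """Steps 4-5 for one patch: solve Eq. (13), then score each class j by
    the residual of Eq. (14), r = ||b_i - A_ij rho_ij|| / ||rho_ij||."""
    k = A_i.shape[1]
    rho = np.linalg.solve(A_i.T @ A_i + lam * np.eye(k), A_i.T @ b_i)
    best, best_r = None, np.inf
    for j in np.unique(labels):
        m = labels == j
        r = np.linalg.norm(b_i - A_i[:, m] @ rho[m]) / (np.linalg.norm(rho[m]) + 1e-12)
        if r < best_r:
            best, best_r = j, r
    return best

def classify_by_voting(patch_dicts, labels, patch_signals):
    """Step 6: majority vote over the s per-patch decisions."""
    votes = [classify_patch(A, labels, b)
             for A, b in zip(patch_dicts, patch_signals)]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Toy check: every patch signal lies in the span of class 0's columns.
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
dicts = [rng.standard_normal((128, 4)) for _ in range(4)]
signals = [A[:, :2] @ np.array([1.0, 1.0]) for A in dicts]
print(classify_by_voting(dicts, labels, signals))  # 0
```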

4. Experimental Analysis

The proposed GPCRC approach and its improved methods were evaluated on three publicly available databases: Extended Yale B [40–42], CMU PIE [43, 44], and LFW [45, 46]. To show the effectiveness of GPCRC, we compared our methods with four classical methods and their improved versions, namely, SRC [13], CRC [15], SSRC [14], and PCRC [16]. Then we tested our GPCRC using several measurement matrices, namely, Random Gaussian matrices, Toeplitz and cyclic matrices, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), to reduce the dimension of the transformed signal, and compared their performance with the conventional PCA method. In all the following experiments, we ran MATLAB 2015b on a typical Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz Windows 7 x64 PC. In our implementation of Gabor filters, the parameters were set as $f = \sqrt{2}$, $K_{\max} = \pi/2$, $\sigma = \pi$, $\mu = 0, 1, \ldots, 7$, and $\nu = 0, 1, \ldots, 4$ for all the experiments below. All the programs were run 20 times on each database, and the mean and variance of each method were reported. In order to report the best result of each method, the parameter $\lambda$ used in SRC, SSRC, CRC, and PCRC was set to 0.001 [13], 0.001 [14], 0.005 [15], and 0.001 [16, 17], respectively. The patch sizes used in PCRC and in GPCRC and its improved methods are 8 × 8 [16, 17] and 16 × 16, respectively.
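The Gabor filter bank with these parameters can be sketched as follows (NumPy rather than the MATLAB used in the experiments; the kernel formula is the standard one from Gabor-face work such as [19], and the 16 × 16 kernel support is an assumption, not a parameter stated in the paper):

```python
import numpy as np

def gabor_kernel(mu, nu, size=16, f=np.sqrt(2), k_max=np.pi / 2, sigma=np.pi):
    """One complex Gabor kernel; orientations mu = 0..7 and scales
    nu = 0..4 give the 40-filter bank used for the Gabor feature."""
    k = (k_max / f**nu) * np.exp(1j * np.pi * mu / 8)  # wave vector
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    z2 = x**2 + y**2
    kz = k.real * x + k.imag * y
    return (abs(k)**2 / sigma**2) * np.exp(-abs(k)**2 * z2 / (2 * sigma**2)) \
        * (np.exp(1j * kz) - np.exp(-sigma**2 / 2))

bank = [gabor_kernel(mu, nu) for mu in range(8) for nu in range(5)]
print(len(bank), bank[0].shape)  # 40 (16, 16)
```

Convolving a patch with each kernel and keeping the magnitude of the complex response yields the Gabor magnitude feature fed into Step 3.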

4.1. Extended Yale B. To test the robustness of the proposed method to illumination, we used the classic Extended Yale B database [40–42], because faces from the Extended Yale B database were acquired under different illumination conditions. The Extended Yale B database contains 38 human subjects under 9 poses and 64 illumination conditions. For clear comparison, all frontal-face images marked with P00 were used in our experiment, and the face images were downsampled to 32 × 32. 10 or 20 faces from each individual were selected randomly as training samples; 30 others from each individual were selected as test samples. Figure 4 shows some P00 marked samples from the Extended Yale B database. The experimental results of each method are shown in Table 1. In Figure 5, we compared the recognition rates of

Mathematical Problems in Engineering 7

Table 1: Recognition rate (%) on the Extended Yale B database.

Features            Methods   10 training samples   20 training samples
Intensity feature   SRC       76.15 ± 9.60          80.50 ± 4.32
                    CRC       78.60 ± 10.58         83.00 ± 6.64
                    SSRC      87.53 ± 7.91          90.01 ± 4.38
                    PCRC      90.44 ± 1.09          94.25 ± 1.01
LBP feature         SRC       83.56 ± 11.32         89.74 ± 9.52
                    CRC       83.02 ± 10.19         90.30 ± 3.27
                    SSRC      90.58 ± 6.39          95.89 ± 3.01
                    PCRC      79.67 ± 12.57         87.25 ± 5.08
Gabor feature       SRC       85.25 ± 5.36          91.50 ± 3.50
                    CRC       86.18 ± 4.43          90.00 ± 2.80
                    SSRC      87.06 ± 9.49          92.24 ± 5.06
                    PCRC      94.34 ± 2.06          98.83 ± 0.94

Figure 4: Some marked samples of the Extended Yale B database.

[Figure 5: The performance of each dimension reduction method for GPCRC on the Extended Yale B database; recognition rate (%) versus feature dimension (0–1100) for Random Gaussian matrices, Toeplitz and cyclic matrices, PCA, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16). (a) 10 training samples; (b) 20 training samples.]


Table 2: The speed of each dimension reduction method.

Method     Random Gaussian   Toeplitz and cyclic   PCA
Time (s)   111.31            108.36                4172.95

Method     Deterministic sparse Toeplitz
           (Δ = 4)           (Δ = 8)               (Δ = 16)
Time (s)   126.43            122.26                126.78

Figure 6: Some marked samples of the CMU PIE database.

various methods at the feature dimensions 32, 64, 128, 256, 512, and 1024 for each patch in GPCRC; those numbers correspond to downsampling ratios of 1/320, 1/160, 1/80, 1/40, 1/20, and 1/10, respectively. In Table 1, it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments. In Figure 5, we can see the performance of the various dimensionality reduction methods for GPCRC; in Figure 5(a), 10 samples of each individual were used as training samples. When the feature dimension is low (≤256), the best performance of PCA is 92.80% (256 dimensions), which is significantly higher than that of the measurement matrices at 256 dimensions: Random Gaussian matrices are 88.77%, Toeplitz and cyclic matrices are 85.17%, Deterministic sparse Toeplitz matrices (Δ = 4) are 86.67%, Deterministic sparse Toeplitz matrices (Δ = 8) are 85.69%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 86.05%; but none of these is as good as the performance at the original dimension (10240 dimensions). The reason why PCA performs significantly better than the measurement matrices at low dimensions can be analyzed through their operating mechanisms: PCA linearly transforms the original data into a set of linearly independent components, which extracts the principal components of the data, and these principal components can represent image signals more accurately [31–33]. The theory of compressed sensing [34–36] states that, when a signal is sparse or compressible in a transform domain, a measurement matrix that is incoherent with the transform matrix can be used to project the transform coefficients onto a low-dimensional vector, and this projection preserves the information needed for signal reconstruction. Compressed sensing can achieve reconstruction with high accuracy or high probability from a small number of projections. At extremely low dimensions, the reconstruction information is insufficient and the original signal cannot be reconstructed accurately, so the recognition rate is not high. When the dimension is increased and the measurements can more accurately reconstruct the original signal, the recognition rate improves. When the feature dimension is higher (≥512), PCA cannot work because of the sample size. At 512 dimensions, the measurement matrices (Deterministic sparse Toeplitz matrices (Δ = 8) are 95.21%; Deterministic sparse Toeplitz matrices (Δ = 16) are 94.91%) have reached the performance of the original dimension (10240 dimensions), while the dimension is only 1/20 of the original. At 1/10 of the original dimension, the performance of the measurement matrices has basically achieved the performance of the original dimension: Random Gaussian matrices are 95.60%, Deterministic sparse Toeplitz matrices (Δ = 4) are 94.64%, Deterministic sparse Toeplitz matrices (Δ = 8) are 94.56%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 94.98%. In Figure 5(b), 20 samples of each individual were used as training samples. When the feature dimension is ≤512, the best performance is PCA's 97.50% (512 dimensions). At 1024 dimensions, PCA cannot work, and the performance of the measurement matrices has basically achieved the performance of the original dimension: Random Gaussian matrices are 98.80%, Deterministic sparse Toeplitz matrices (Δ = 4) are 98.68%, Deterministic sparse Toeplitz matrices (Δ = 8) are 99.01%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 98.77%. In Table 2, we fixed the dimension of each reduction method at 512 in the case of 20 training samples. In general, the complexity of PCA is O(n³) [47], where n is the number of rows of the covariance matrix. The example we provided consists of 38 individuals, each with 20 samples. Since the number of Gabor features per patch (10240) far outweighs the total number of training samples (760), the number of rows of the covariance matrix is determined by the total number of training samples, and the complexity of PCA is approximately O(760³). However, in the proposed measurement matrix algorithm, the measurement matrix is formed by a certain deterministic distribution (e.g., Gaussian distribution) and structure according to the required dimension and length of the signal, so the measurement matrix demonstrates low complexity [36, 48, 49]. Table 2 lists the actual speed of each dimension reduction method for one patch of all samples (including all training and all testing samples). Based on the above analysis, we can see that the PCA method has the most outstanding performance, but it is limited by the sample size and it is time-consuming. The measurement matrices do not perform well at low dimensions but are not limited by the sample size; once the dimension reaches a certain value, their performance matches that of the original dimension. Thus, the dimension of the data can be reduced without loss of recognition rate.
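The cost argument above can be made concrete with a small sketch: projecting one patch's feature through a random Gaussian measurement matrix is a single M × N matrix–vector product, roughly O(MN) work, with no training-data-dependent eigendecomposition. The feature vector below is a random stand-in; M = 512 and N = 10240 follow the dimensions discussed in the text, and the 1/√M scaling is a common normalization convention, not something the paper specifies.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 10240, 512                 # Gabor feature length per patch, target dimension
# Random Gaussian measurement matrix; generated once, independent of the data
phi = rng.standard_normal((M, N)) / np.sqrt(M)

g = rng.standard_normal(N)        # stand-in for one patch's Gabor feature vector
a = phi @ g                       # measured signal: one O(M*N) matrix-vector product
```

Unlike PCA, nothing here depends on the number of training samples, which is why the measurement matrices keep working in the ≥512-dimension regime where PCA breaks down.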


[Figure 7: The performance of each dimension reduction method for GPCRC on the CMU PIE database; recognition rate (%) versus feature dimension (0–1100) for Random Gaussian matrices, Toeplitz and cyclic matrices, PCA, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16). (a) 2 training samples; (b) 3 training samples; (c) 4 training samples; (d) 5 training samples.]

4.2. CMU PIE. In order to further test the robustness to illumination, pose, and expression, we utilized the currently popular CMU PIE database [43, 44], which consists of 41,368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To verify the advantage of our method under the small-sample-size condition, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images were downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC has the best results. In Figure 7, we compared the recognition rates of various dimension reduction methods


Table 3: Recognition rate (%) on the CMU PIE database.

Features            Method   2 training samples   3 training samples   4 training samples   5 training samples
Intensity feature   SRC      28.21 ± 5.43         39.36 ± 9.82         44.73 ± 15.29        60.54 ± 6.45
                    CRC      30.34 ± 6.15         40.62 ± 9.20         44.35 ± 7.42         61.03 ± 7.82
                    SSRC     37.52 ± 8.34         54.74 ± 8.50         57.32 ± 9.82         66.73 ± 8.85
                    PCRC     42.32 ± 1.17         56.46 ± 1.88         61.16 ± 2.91         70.47 ± 3.87
LBP feature         SRC      38.42 ± 6.46         53.37 ± 7.10         53.67 ± 5.21         63.31 ± 7.24
                    CRC      37.32 ± 7.75         54.83 ± 6.15         54.27 ± 8.31         64.25 ± 6.90
                    SSRC     43.17 ± 5.71         56.84 ± 9.21         64.53 ± 4.76         68.25 ± 4.35
                    PCRC     41.21 ± 7.10         52.21 ± 11.76        62.54 ± 6.21         70.22 ± 3.07
Gabor feature       SRC      41.43 ± 4.20         53.36 ± 5.40         53.58 ± 7.32         66.79 ± 5.45
                    CRC      40.25 ± 5.15         55.35 ± 3.28         56.68 ± 7.42         66.24 ± 6.18
                    SSRC     44.55 ± 6.00         55.96 ± 6.20         62.50 ± 3.16         70.41 ± 4.79
                    PCRC     48.57 ± 1.29         58.45 ± 1.94         68.82 ± 2.56         73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample-size limit, and at 1024 dimensions the performance of the measurement matrices has basically achieved the performance of the original dimension. The recognition rate of each measurement matrix is shown below.

(i) Random Gaussian matrices are 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices are 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4) are 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8) are 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16) are 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrices under the premise of preserving the recognition rate.

4.3. LFW. In practice, the number of training samples is very limited, and only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5,749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a, each containing no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples from each individual for training and another 5 samples from each

Figure 8: Some marked samples of the LFW database.

individual for testing. The face images were downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with the training sample size from 1 to 2. In Figure 9, we can also see the advantages of the measurement matrices for small sample sizes. At 1024 dimensions, the performance of the measurement matrices with a single training sample is as follows: Random Gaussian matrices are 26.18%, Toeplitz and cyclic matrices are 24.68%, Deterministic sparse Toeplitz matrices (Δ = 4) are 24.43%, Deterministic sparse Toeplitz matrices (Δ = 8) are 25.85%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 25.28%. The performance of the measurement matrices with two training samples is as follows: Random Gaussian matrices are 39.68%, Toeplitz and cyclic matrices are 37.72%, Deterministic sparse Toeplitz matrices (Δ = 4) are 37.56%, Deterministic sparse Toeplitz matrices (Δ = 8) are 39.91%, and Deterministic sparse Toeplitz matrices (Δ = 16) are 38.25%.

5. Conclusion

In order to alleviate the influence of an unreliable environment on small-sample-size FR, in this paper we applied the improved method with Gabor local features to PCRC, and we proposed to use measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of an unreliable environment in small-sample FR. (2) The measurement


Table 4: Recognition rate (%) on the LFW database.

Features            Method   1 training sample   2 training samples
Intensity feature   SRC      10.58 ± 7.20        19.02 ± 8.29
                    CRC      9.08 ± 8.15         20.39 ± 7.20
                    SSRC     17.03 ± 5.87        28.52 ± 7.50
                    PCRC     22.29 ± 3.96        33.29 ± 5.28
LBP feature         SRC      15.52 ± 5.54        25.31 ± 7.42
                    CRC      16.23 ± 6.21        26.22 ± 6.57
                    SSRC     19.16 ± 3.41        32.10 ± 4.26
                    PCRC     21.26 ± 4.73        31.59 ± 6.11
Gabor feature       SRC      18.68 ± 4.23        31.35 ± 6.40
                    CRC      17.31 ± 8.15        30.83 ± 5.20
                    SSRC     21.30 ± 7.21        33.67 ± 4.33
                    PCRC     25.82 ± 3.42        39.60 ± 3.54

[Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database; recognition rate (%) versus feature dimension (0–1100) for Random Gaussian matrices, Toeplitz and cyclic matrices, PCA, and Deterministic sparse Toeplitz matrices (Δ = 4, 8, 16). (a) Single training sample; (b) 2 training samples.]

matrix approach proposed to deal with the high dimensionality in the GPCRC method can effectively improve the computational efficiency and speed, and it overcomes the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), the Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), the Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), the University Natural Science Research Project of Anhui Province (no. KJ2017A259), and the Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.

[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.

[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM–CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," Los Alamitos: IEEE Computer Society Press, vol. 13, no. 100, pp. 21–27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1–2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies," IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.

[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.


[Figure 3: The operating mechanism of the measurement matrix. (a) The linear measurement process: the M × 1 measured vector y is obtained as y = Φx, where Φ is the M × N measurement matrix and x is the N-dimensional Gabor feature of an image patch. (b) The structure of the M × N measurement matrices: Random Gaussian matrices, Toeplitz and cyclic matrices, and Deterministic sparse Toeplitz matrices (Δ = 4), (Δ = 8), and (Δ = 16).]

assignment operations are performed on T, where the elements a_i (i ∈ Λ, and Λ is the set of (N + M − 1)/Δ indexes randomly chosen from the index sequence 1 → N + M − 1) obey the independent and identically distributed (i.i.d.) random Gaussian distribution, while the other elements are 0. Finally, we obtain the Deterministic sparse Toeplitz matrix with a_{i+1,j+1} = a_{i,j}, according to the construction of the Toeplitz and cyclic matrices.
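The construction above can be sketched in a few lines of NumPy: generate a length-(N + M − 1) sequence with only (N + M − 1)/Δ nonzero i.i.d. Gaussian entries, then lay it out along constant diagonals so that a[i+1, j+1] = a[i, j]. The small M, N, and seed below are illustrative choices only.

```python
import numpy as np

def sparse_toeplitz(M, N, delta, rng=None):
    """Deterministic sparse Toeplitz measurement matrix: the generating sequence
    of length N + M - 1 has only (N + M - 1) // delta nonzero i.i.d. Gaussian
    entries, and the matrix has constant diagonals (a[i+1, j+1] = a[i, j])."""
    rng = rng or np.random.default_rng(0)
    L = N + M - 1                                        # generating sequence length
    t = np.zeros(L)
    idx = rng.choice(L, size=L // delta, replace=False)  # random support, density 1/delta
    t[idx] = rng.standard_normal(idx.size)
    # entry (i, j) depends only on the diagonal offset i - j -> Toeplitz structure
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    return t[i - j + N - 1]

A = sparse_toeplitz(8, 16, delta=4)   # toy 8 x 16 example with sparsity 1/4
```

Because the whole matrix is determined by one short sequence, generating it is far cheaper than learning a PCA basis, which is the point made in Section 4.1.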

3.3. The Proposed Face Recognition Approach. Although PCRC can indeed solve the problem of small sample size, the method is still based on the original features of the patch, and its robustness and accuracy are yet to be improved. Based on the above analysis, the Gabor feature and the measurement matrix are fused into PCRC for FR, which not only solves the problem of small sample size but also enhances the robustness and efficiency of the method.

Our proposed method for FR is summarized as follows.

Step 1 (input). Face training samples X and test sample y.

Step 2 (patch). Divide each of the k face training samples in X into s patches and divide the test sample y into s patches, where y_i and M_{ij} denote the patches at corresponding positions of the samples:

$$
\mathbf{X} = \begin{bmatrix} \mathbf{M}_{1} \\ \vdots \\ \mathbf{M}_{i} \\ \vdots \\ \mathbf{M}_{s} \end{bmatrix} = \begin{bmatrix} \mathbf{M}_{11} & \cdots & \mathbf{M}_{1j} & \cdots & \mathbf{M}_{1k} \\ \vdots & & \vdots & & \vdots \\ \mathbf{M}_{i1} & \cdots & \mathbf{M}_{ij} & \cdots & \mathbf{M}_{ik} \\ \vdots & & \vdots & & \vdots \\ \mathbf{M}_{s1} & \cdots & \mathbf{M}_{sj} & \cdots & \mathbf{M}_{sk} \end{bmatrix},
\qquad
\mathbf{y} = \begin{bmatrix} \mathbf{y}_{1} \\ \vdots \\ \mathbf{y}_{i} \\ \vdots \\ \mathbf{y}_{s} \end{bmatrix}.
\tag{9}
$$
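For the 32 × 32 images and 16 × 16 GPCRC patch size used in the experiments, the patching of Step 2 can be sketched as follows; this is a minimal sketch in which the face array is a stand-in and the per-patch Gabor feature extraction of Step 3 is omitted.

```python
import numpy as np

def split_patches(img, patch=16):
    """Split a square image into non-overlapping patch x patch blocks,
    each returned as a flattened vector (one row per patch)."""
    h, w = img.shape
    blocks = [img[r:r + patch, c:c + patch].ravel()
              for r in range(0, h, patch)
              for c in range(0, w, patch)]
    return np.stack(blocks)                   # shape: (s, patch * patch)

face = np.arange(32 * 32, dtype=float).reshape(32, 32)  # stand-in 32 x 32 face image
P = split_patches(face)                       # s = 4 patches of 256 pixels each
```

With the 16 × 16 patch size, each 32 × 32 face yields s = 4 patches; stacking the same position across the k training samples gives the local dictionaries M_i of Eq. (9).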


Step 3 (extract and measure features). (1) Extract Gabor features from the patches of the training samples and of the test sample, respectively, to obtain

$$
\mathbf{G} = \begin{bmatrix} \mathbf{G}_{1} \\ \vdots \\ \mathbf{G}_{i} \\ \vdots \\ \mathbf{G}_{s} \end{bmatrix} = \begin{bmatrix} \mathbf{G}_{11} & \cdots & \mathbf{G}_{1j} & \cdots & \mathbf{G}_{1k} \\ \vdots & & \vdots & & \vdots \\ \mathbf{G}_{i1} & \cdots & \mathbf{G}_{ij} & \cdots & \mathbf{G}_{ik} \\ \vdots & & \vdots & & \vdots \\ \mathbf{G}_{s1} & \cdots & \mathbf{G}_{sj} & \cdots & \mathbf{G}_{sk} \end{bmatrix},
\qquad
\mathbf{G}_{y} = \begin{bmatrix} \mathbf{G}_{y1} \\ \vdots \\ \mathbf{G}_{yi} \\ \vdots \\ \mathbf{G}_{ys} \end{bmatrix}.
\tag{10}
$$

(2) Carry out the measurement and reduce the dimension of each patch feature G_{ij} (i = 1, 2, ..., s; j = 1, 2, ..., k) using the measurement matrix Φ to obtain the measured signals

$$
\mathbf{A} = \begin{bmatrix} \mathbf{A}_{1} \\ \vdots \\ \mathbf{A}_{i} \\ \vdots \\ \mathbf{A}_{s} \end{bmatrix} = \begin{bmatrix} \mathbf{a}_{11} & \cdots & \mathbf{a}_{1j} & \cdots & \mathbf{a}_{1k} \\ \vdots & & \vdots & & \vdots \\ \mathbf{a}_{i1} & \cdots & \mathbf{a}_{ij} & \cdots & \mathbf{a}_{ik} \\ \vdots & & \vdots & & \vdots \\ \mathbf{a}_{s1} & \cdots & \mathbf{a}_{sj} & \cdots & \mathbf{a}_{sk} \end{bmatrix} = \begin{bmatrix} \boldsymbol{\Phi}\mathbf{G}_{11} & \cdots & \boldsymbol{\Phi}\mathbf{G}_{1j} & \cdots & \boldsymbol{\Phi}\mathbf{G}_{1k} \\ \vdots & & \vdots & & \vdots \\ \boldsymbol{\Phi}\mathbf{G}_{i1} & \cdots & \boldsymbol{\Phi}\mathbf{G}_{ij} & \cdots & \boldsymbol{\Phi}\mathbf{G}_{ik} \\ \vdots & & \vdots & & \vdots \\ \boldsymbol{\Phi}\mathbf{G}_{s1} & \cdots & \boldsymbol{\Phi}\mathbf{G}_{sj} & \cdots & \boldsymbol{\Phi}\mathbf{G}_{sk} \end{bmatrix},
\qquad
\mathbf{B} = \begin{bmatrix} \mathbf{b}_{1} \\ \vdots \\ \mathbf{b}_{i} \\ \vdots \\ \mathbf{b}_{s} \end{bmatrix} = \begin{bmatrix} \boldsymbol{\Phi}\mathbf{G}_{y1} \\ \vdots \\ \boldsymbol{\Phi}\mathbf{G}_{yi} \\ \vdots \\ \boldsymbol{\Phi}\mathbf{G}_{ys} \end{bmatrix}.
\tag{11}
$$

Step 4. Collaboratively represent each patch measurement signal b_i of the test sample over the local dictionary A_i of training-sample measurement signals:

$$
\hat{\boldsymbol{\rho}}_{i} = \arg\min_{\boldsymbol{\rho}_{i}} \left\{ \left\| \mathbf{b}_{i} - \mathbf{A}_{i}\boldsymbol{\rho}_{i} \right\|_{2}^{2} + \lambda \left\| \boldsymbol{\rho}_{i} \right\|_{2}^{2} \right\}, \quad i = 1, 2, \ldots, s.
\tag{12}
$$

Equation (12) can be easily derived as follows:

$$
\hat{\boldsymbol{\rho}}_{i} = \mathbf{P}\mathbf{b}_{i}, \qquad \mathbf{P} = \left( \mathbf{A}_{i}^{T}\mathbf{A}_{i} + \lambda\mathbf{I} \right)^{-1} \mathbf{A}_{i}^{T}, \quad i = 1, 2, \ldots, s,
\tag{13}
$$

where I is the identity matrix.

Step 5. The recognition results for patch y_i are

$$
r_{ij}\left(\mathbf{y}_{i}\right) = \frac{\left\| \mathbf{b}_{i} - \mathbf{a}_{ij}\hat{\boldsymbol{\rho}}_{ij} \right\|_{2}^{2}}{\left\| \hat{\boldsymbol{\rho}}_{ij} \right\|_{2}^{2}}, \quad i = 1, 2, \ldots, s, \; j = 1, 2, \ldots, k,
$$

$$
\mathrm{ID}\left(\mathbf{y}_{i}\right) = \arg\min_{j} \left[ r_{ij}\left(\mathbf{y}_{i}\right) \right],
\tag{14}
$$

where \hat{\boldsymbol{\rho}}_{ij} denotes the part of \hat{\boldsymbol{\rho}}_{i} associated with \mathbf{a}_{ij}.

Step 6. Use majority voting over the $s$ patches to obtain the final recognition result.
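Steps 4–6 combine into a short loop: solve (13) for each patch, score candidates with the regularized residual of (14), and vote across patches. The sketch below is an illustrative reading of the procedure that groups dictionary columns by an integer class label; `classify_patches` and the toy data shapes are our assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter

def classify_patches(A, b, labels, lam=0.001):
    """Sketch of Steps 4-6: for each patch i, represent b[i] over the local
    dictionary A[i] (Eq. (13)), score each class by the regularized residual
    of Eq. (14), and majority-vote over the patches."""
    classes = np.unique(labels)
    votes = []
    for A_i, b_i in zip(A, b):
        k = A_i.shape[1]
        # ridge solution of Eq. (13)
        rho = np.linalg.solve(A_i.T @ A_i + lam * np.eye(k), A_i.T @ b_i)
        scores = {}
        for c in classes:
            idx = labels == c
            rho_c = rho[idx]
            resid = np.linalg.norm(b_i - A_i[:, idx] @ rho_c)
            # residual normalized by coefficient energy, as in Eq. (14)
            scores[c] = resid / (np.linalg.norm(rho_c) + 1e-12)
        votes.append(min(scores, key=scores.get))  # Step 5: per-patch decision
    return Counter(votes).most_common(1)[0][0]     # Step 6: majority vote
```

When a test patch lies exactly in the span of one class's columns, that class's normalized residual is near zero and wins the patch vote.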

4. Experimental Analysis

The proposed approach GPCRC and its improved methods were evaluated on three publicly available databases: Extended Yale B [40–42], CMU PIE [43, 44], and LFW [45, 46]. To show the effectiveness of GPCRC, we compared our methods with four classical methods and their improved methods, namely, SRC [13], CRC [15], SSRC [14], and PCRC [16]. Then we tested our GPCRC using several measurement matrices, namely, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), to reduce the dimension of the transformed signal, and compared their performance with the conventional PCA method. In all the following experiments, we ran MATLAB 2015b on a typical Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz, Windows 7 x64 PC. In our implementation of Gabor filters, the parameters were set as f = √2, K_max = π/2, σ = π, μ = 0, 1, ..., 7, and ν = 0, 1, ..., 4 for all the experiments below. All the programs were run 20 times on each database, and the mean and variance of each method were reported. In order to report the best result of each method, the parameter λ used in SRC, SSRC, CRC, and PCRC was set to 0.001 [13], 0.001 [14], 0.005 [15], and 0.001 [16, 17], respectively. The patch sizes used in PCRC and in GPCRC and its improved methods were 8 × 8 [16, 17] and 16 × 16, respectively.
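The Gabor parameters quoted above (f = √2, K_max = π/2, σ = π, 8 orientations × 5 scales) plug into the standard Gabor wavelet formula. The sketch below uses the common textbook kernel definition, which may differ from the authors' implementation in normalization and windowing details; the 16 × 16 window size is an assumption matching the GPCRC patch size.

```python
import numpy as np

def gabor_kernel(mu, nu, size=16, kmax=np.pi / 2, f=np.sqrt(2), sigma=np.pi):
    """One Gabor wavelet kernel with the paper's parameter choices
    (f = sqrt(2), Kmax = pi/2, sigma = pi, mu in 0..7, nu in 0..4)."""
    # complex wave vector: magnitude Kmax / f^nu, orientation mu*pi/8
    k = (kmax / f**nu) * np.exp(1j * mu * np.pi / 8)
    half = size // 2
    ys, xs = np.mgrid[-half:half, -half:half]
    z2 = xs**2 + ys**2
    k2 = np.abs(k) ** 2
    # Gaussian envelope times (complex oscillation minus DC offset)
    envelope = (k2 / sigma**2) * np.exp(-k2 * z2 / (2 * sigma**2))
    wave = np.exp(1j * (k.real * xs + k.imag * ys)) - np.exp(-sigma**2 / 2)
    return envelope * wave

# 40-filter bank: 8 orientations x 5 scales, as in the experiments
bank = [gabor_kernel(mu, nu) for mu in range(8) for nu in range(5)]
```

Convolving a patch with all 40 kernels and concatenating the magnitude responses yields the high-dimensional Gabor feature vector that the measurement matrix later compresses.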

4.1. Extended Yale B. To test the robustness of the proposed method to illumination, we used the classic Extended Yale B database [40–42], because its faces were acquired under different illumination conditions. The Extended Yale B database contains 38 human subjects under 9 poses and 64 illumination conditions. For a clear comparison, all frontal-face images marked with P00 were used in our experiment, and the face images were downsampled to 32 × 32. 10 or 20 faces from each individual were selected randomly as training samples; 30 others from each individual were selected as test samples. Figure 4 shows some P00-marked samples from the Extended Yale B database. The experimental results of each method are shown in Table 1. In Figure 5, we compared the recognition rates of

Mathematical Problems in Engineering 7

Table 1: Recognition rate (%) on Extended Yale B database.

| Features | Methods | 10 training samples | 20 training samples |
|---|---|---|---|
| Intensity feature | SRC | 76.15 ± 9.60 | 80.50 ± 4.32 |
| | CRC | 78.60 ± 10.58 | 83.00 ± 6.64 |
| | SSRC | 87.53 ± 7.91 | 90.01 ± 4.38 |
| | PCRC | 90.44 ± 1.09 | 94.25 ± 1.01 |
| LBP feature | SRC | 83.56 ± 11.32 | 89.74 ± 9.52 |
| | CRC | 83.02 ± 10.19 | 90.30 ± 3.27 |
| | SSRC | 90.58 ± 6.39 | 95.89 ± 3.01 |
| | PCRC | 79.67 ± 12.57 | 87.25 ± 5.08 |
| Gabor feature | SRC | 85.25 ± 5.36 | 91.50 ± 3.50 |
| | CRC | 86.18 ± 4.43 | 90.00 ± 2.80 |
| | SSRC | 87.06 ± 9.49 | 92.24 ± 5.06 |
| | PCRC | 94.34 ± 2.06 | 98.83 ± 0.94 |

Figure 4: Some marked samples of the Extended Yale B database.

[Figure 5 plots recognition rate (%) against feature dimension (0–1100) for random Gaussian matrices, Toeplitz and cyclic matrices, deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), and PCA; panel (a) uses 10 training samples and panel (b) uses 20 training samples.]

Figure 5: The performance of each dimension reduction method for GPCRC on the Extended Yale B database.


Table 2: The speed of each dimension reduction method.

| Method | Random Gaussian | Toeplitz and cyclic | PCA |
|---|---|---|---|
| Time (s) | 1.1131 | 1.0836 | 41.7295 |

| Method | Deterministic sparse Toeplitz (Δ = 4) | (Δ = 8) | (Δ = 16) |
|---|---|---|---|
| Time (s) | 1.2643 | 1.2226 | 1.2678 |
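The speed gap reported here (PCA versus a single projection) can be checked with a toy timing sketch. This is an illustrative micro-benchmark, not the paper's experiment: the feature dimension is scaled down from 10240 to keep it fast, and the absolute timings depend on the machine.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
# 760 training vectors, as in the paper's example (38 subjects x 20 samples);
# 2048-dimensional stand-in features instead of the full 10240-dim Gabor vectors
X = rng.normal(size=(760, 2048))

t0 = time.perf_counter()
# PCA via eigendecomposition of the sample-count-limited Gram matrix:
# cost is cubic in the number of training samples (O(760^3))
Xc = X - X.mean(axis=0)
w, V = np.linalg.eigh(Xc @ Xc.T)
t_pca = time.perf_counter() - t0

t0 = time.perf_counter()
# measurement-matrix reduction: generate Phi and do one matrix multiply
Phi = rng.normal(size=(512, X.shape[1]))
Y = X @ Phi.T
t_mm = time.perf_counter() - t0
```

On typical hardware the projection path is markedly cheaper, and unlike PCA it does not need the training data at all to construct the projection.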

Figure 6: Some marked samples of the CMU PIE database.

various methods with feature dimensions 32, 64, 128, 256, 512, and 1024 for each patch in GPCRC; those numbers correspond to downsampling ratios of 1/320, 1/160, 1/80, 1/40, 1/20, and 1/10, respectively. In Table 1, it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments. In Figure 5, we can see the performance of the various dimensionality reduction methods for GPCRC. In Figure 5(a), 10 samples of each individual were used as training samples. When the feature dimension is low (≤256), the best performance of PCA is 92.80% (at 256 dimensions), which is significantly higher than that of the measurement matrices at 256 dimensions: random Gaussian matrices reach 88.77%, Toeplitz and cyclic matrices 85.17%, deterministic sparse Toeplitz matrices (Δ = 4) 86.67%, (Δ = 8) 85.69%, and (Δ = 16) 86.05%; but none of these matches the performance at the original dimension (10240 dimensions).

The reason why PCA performs significantly better than the measurement matrices at low dimensions can be analyzed through their operation mechanisms. PCA linearly transforms the original data into a set of linearly independent components, extracting the principal components of the data; these principal components can represent image signals more accurately [31–33]. The theory of compressed sensing [34–36] states that when a signal is sparse or compressible in a transform domain, a measurement matrix that is incoherent with the transform matrix can project the transform coefficients onto a low-dimensional vector while preserving the information needed for signal reconstruction; compressed sensing can then achieve reconstruction with high accuracy or high probability from a small number of projections. At extremely low dimensions, the measured information is insufficient and the original signal cannot be reconstructed accurately, so the recognition rate is not high; as the dimension increases and the measurements capture the original signal more accurately, the recognition rate improves.

When the feature dimension is higher (≥512), PCA cannot work because of the sample size. At 512 dimensions, the measurement matrices (deterministic sparse Toeplitz matrices (Δ = 8) at 95.21% and (Δ = 16) at 94.91%) already reach the performance of the original dimension (10240 dimensions), at only 1/20 of the original dimension. At 1/10 of the original dimension, the performance of the measurement matrices essentially matches that of the original dimension: random Gaussian matrices are 95.60%, deterministic sparse Toeplitz matrices (Δ = 4) 94.64%, (Δ = 8) 94.56%, and (Δ = 16) 94.98%. In Figure 5(b), 20 samples of each individual were used as training samples. When the feature dimension is ≤512, the best performer is PCA at 97.50% (512 dimensions). At 1024 dimensions, PCA cannot work, and the performance of the measurement matrices essentially matches that of the original dimension: random Gaussian matrices are 98.80%, deterministic sparse Toeplitz matrices (Δ = 4) 98.68%, (Δ = 8) 99.01%, and (Δ = 16) 98.77%.

In Table 2, we fixed the dimension of each reduction method at 512 in the case of 20 training samples. In general, the complexity of PCA is O(n³) [47], where n is the number of rows of the covariance matrix. The example we provided consists of 38 individuals, each with 20 samples. Since the number of Gabor features per patch (10240) far outweighs the total number of training samples (760), the number of rows of the covariance matrix is determined by the total number of training samples, and the complexity of PCA is approximately O(760³). In the proposed measurement-matrix algorithms, however, the measurement matrix is formed by a deterministic distribution (e.g., a Gaussian distribution) and structure according to the required dimension and the length of the signal, so the measurement matrix has low complexity [36, 48, 49]. Table 2 lists the actual speed of each dimension reduction method for one patch of all samples (including all training samples and all testing samples). Based on the above analysis, the PCA method has the most outstanding performance, but it is limited by the sample size and is time-consuming. The measurement matrices do not perform well at low dimensions but are not limited by the sample size; once the dimension reaches a certain value, their performance equals that of the original dimension. Thus, the dimension of the data can be reduced without loss of recognition rate.
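Of the matrices compared here, the deterministic sparse Toeplitz family is the least standard. The following is one plausible NumPy construction in the spirit of [36]: a Toeplitz matrix whose generating sequence is nonzero only every Δ positions (the "pitch"), with ±1 entries. The exact construction and normalization in [36] may differ; this sketch only illustrates the structure.

```python
import numpy as np

def sparse_toeplitz_measurement(m, n, delta=4, seed=0):
    """Sketch of a sparse Toeplitz measurement matrix: constant along
    diagonals, generated by a sequence that is nonzero only at every
    `delta`-th position, with +/-1 entries."""
    rng = np.random.default_rng(seed)
    gen = np.zeros(m + n - 1)
    nz = np.arange(0, m + n - 1, delta)        # pitch-Delta support
    gen[nz] = rng.choice([-1.0, 1.0], size=nz.size)
    # Phi[i, j] = gen[i - j + n - 1], i.e. constant along each diagonal
    idx = np.arange(m)[:, None] - np.arange(n)[None, :] + n - 1
    return gen[idx] / np.sqrt(m / delta)       # rough column-energy scaling

Phi = sparse_toeplitz_measurement(64, 256, delta=4)
```

Only one short generating sequence is stored, and roughly a 1/Δ fraction of the entries is nonzero, which is what makes these matrices cheap to build and apply compared with dense random projections.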


[Figure 7 plots recognition rate (%) against feature dimension (0–1100) for random Gaussian matrices, Toeplitz and cyclic matrices, deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), and PCA; panels (a)–(d) use 2, 3, 4, and 5 training samples, respectively.]

Figure 7: The performance of each dimension reduction method for GPCRC on the CMU PIE database.

4.2. CMU PIE. In order to further test robustness to illumination, pose, and expression, we utilized the currently popular CMU PIE database [43, 44], which consists of 41,368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To test the advantage of our method under small sample size conditions, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images were downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC has the best results. In Figure 7, we compared the recognition rates of the various dimension reduction methods


Table 3: Recognition rate (%) on CMU PIE database.

| Features | Method | 2 training samples | 3 training samples | 4 training samples | 5 training samples |
|---|---|---|---|---|---|
| Intensity feature | SRC | 28.21 ± 5.43 | 39.36 ± 9.82 | 44.73 ± 15.29 | 60.54 ± 6.45 |
| | CRC | 30.34 ± 6.15 | 40.62 ± 9.20 | 44.35 ± 7.42 | 61.03 ± 7.82 |
| | SSRC | 37.52 ± 8.34 | 54.74 ± 8.50 | 57.32 ± 9.82 | 66.73 ± 8.85 |
| | PCRC | 42.32 ± 1.17 | 56.46 ± 1.88 | 61.16 ± 2.91 | 70.47 ± 3.87 |
| LBP feature | SRC | 38.42 ± 6.46 | 53.37 ± 7.10 | 53.67 ± 5.21 | 63.31 ± 7.24 |
| | CRC | 37.32 ± 7.75 | 54.83 ± 6.15 | 54.27 ± 8.31 | 64.25 ± 6.90 |
| | SSRC | 43.17 ± 5.71 | 56.84 ± 9.21 | 64.53 ± 4.76 | 68.25 ± 4.35 |
| | PCRC | 41.21 ± 7.10 | 52.21 ± 11.76 | 62.54 ± 6.21 | 70.22 ± 3.07 |
| Gabor feature | SRC | 41.43 ± 4.20 | 53.36 ± 5.40 | 53.58 ± 7.32 | 66.79 ± 5.45 |
| | CRC | 40.25 ± 5.15 | 55.35 ± 3.28 | 56.68 ± 7.42 | 66.24 ± 6.18 |
| | SSRC | 44.55 ± 6.00 | 55.96 ± 6.20 | 62.50 ± 3.16 | 70.41 ± 4.79 |
| | PCRC | 48.57 ± 1.29 | 58.45 ± 1.94 | 68.82 ± 2.56 | 73.18 ± 2.44 |

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, and at 1024 dimensions the performance of the measurement matrices essentially matches that of the original dimension. The recognition rate of each measurement matrix is as follows.

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrices while preserving the recognition rate.

4.3. LFW. In practice, the number of training samples is very limited, and only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5,749 different subjects captured in unconstrained environments [45]; LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a, each containing no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples from each individual for training and another 5 samples from each individual for testing. The face images were downsampled to 32 × 32.

Figure 8: Some marked samples of the LFW database.

Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with the training sample size from 1 to 2. In Figure 9, we can also see the advantages of the measurement matrices at small sample sizes. At 1024 dimensions, the performance of the measurement matrices with a single training sample is as follows: random Gaussian matrices are 26.18%, Toeplitz and cyclic matrices 24.68%, deterministic sparse Toeplitz matrices (Δ = 4) 24.43%, (Δ = 8) 25.85%, and (Δ = 16) 25.28%. With two training samples: random Gaussian matrices are 39.68%, Toeplitz and cyclic matrices 37.72%, deterministic sparse Toeplitz matrices (Δ = 4) 37.56%, (Δ = 8) 39.91%, and (Δ = 16) 38.25%.

Table 4: Recognition rate (%) on LFW database.

| Features | Method | 1 training sample | 2 training samples |
|---|---|---|---|
| Intensity feature | SRC | 10.58 ± 7.20 | 19.02 ± 8.29 |
| | CRC | 9.08 ± 8.15 | 20.39 ± 7.20 |
| | SSRC | 17.03 ± 5.87 | 28.52 ± 7.50 |
| | PCRC | 22.29 ± 3.96 | 33.29 ± 5.28 |
| LBP feature | SRC | 15.52 ± 5.54 | 25.31 ± 7.42 |
| | CRC | 16.23 ± 6.21 | 26.22 ± 6.57 |
| | SSRC | 19.16 ± 3.41 | 32.10 ± 4.26 |
| | PCRC | 21.26 ± 4.73 | 31.59 ± 6.11 |
| Gabor feature | SRC | 18.68 ± 4.23 | 31.35 ± 6.40 |
| | CRC | 17.31 ± 8.15 | 30.83 ± 5.20 |
| | SSRC | 21.30 ± 7.21 | 33.67 ± 4.33 |
| | PCRC | 25.82 ± 3.42 | 39.60 ± 3.54 |

[Figure 9 plots recognition rate (%) against feature dimension (0–1100) for random Gaussian matrices, Toeplitz and cyclic matrices, deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), and PCA; panel (a) uses a single training sample and panel (b) uses 2 training samples.]

Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database.

5. Conclusion

In order to alleviate the influence of unreliable environments on small-sample-size FR, in this paper we applied an improved method with Gabor local features to PCRC, and we proposed to use measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small-sample FR. (2) The measurement matrices proposed to handle the high dimensionality in the GPCRC method can effectively improve computational efficiency and speed, and they overcome the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), the Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), the Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), the University Natural Science Research Project of Anhui Province (no. KJ2017A259), and the Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.

[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.

[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. KSwathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," Los Alamitos: IEEE Computer Society Press, vol. 13, no. 100, pp. 21–27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1-2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.

[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.


Page 6: SUBMITTED TO MATHEMATICAL PROBLEMS IN …downloads.hindawi.com/journals/mpe/aip/3025264.pdf · SUBMITTED TO MATHEMATICAL PROBLEMS IN ENGINEERING 1 Patch based Collaborative Representation

6 Mathematical Problems in Engineering

Step 3 (extract and measure features) (1) Extract Gaborfeature from patches of training samples and test samplerespectively to obtain

G =[[[[[[[[[[[

G1G119894G119904

]]]]]]]]]]]

=

[[[[[[[[[[[[

G11 sdot sdot sdot G1119895 sdot sdot sdot G1119896

G1198941 sdot sdot sdot G119894119895 sdot sdot sdot G119894119896

G1199041 sdot sdot sdot G119904119895 sdot sdot sdot G119904119896

]]]]]]]]]]]]

Gy =[[[[[[[[[[[

Gy1

Gy119894

Gy119904

]]]]]]]]]]]

(10)

(2) Carry out themeasurement and reduce the dimensionof each patch G119894119895 (119894 = 1 2 119904 119895 = 1 2 119896) using themeasurement matrixΦ to obtain measured signal

A =[[[[[[[[[[[

A1A119894A119904

]]]]]]]]]]]

=

[[[[[[[[[[[[

a11 sdot sdot sdot a1119895 sdot sdot sdot a1119896 a1198941 sdot sdot sdot a119894119895 sdot sdot sdot a119894119896 a1199041 sdot sdot sdot a119904119895 sdot sdot sdot a119904119896

]]]]]]]]]]]]

=

[[[[[[[[[[[[

ΦG11 sdot sdot sdot ΦG1119895 sdot sdot sdot ΦG1119896 ΦG1198941 sdot sdot sdot ΦG119894119895 sdot sdot sdot ΦG119894119896 ΦG1199041 sdot sdot sdot ΦG119904119895 sdot sdot sdot ΦG119904119896

]]]]]]]]]]]]

B =[[[[[[[[[[[

b1b119894b119904

]]]]]]]]]]]

=[[[[[[[[[[[

ΦGy1ΦGy119894ΦGy119904

]]]]]]]]]]]

(11)

Step 4 Collaborative representation of each patch measure-ment signal bi of the test sample over the training samplemeasurement signal local dictionary Ai

119894 = argmin120588119894

1003817100381710038171003817b119894 minus A119894120588119894100381710038171003817100381722 + 120582 1003817100381710038171003817120588119894100381710038171003817100381722 119894 = 1 2 119904 (12)

Equation (12) can be easily derived as follows

119894 = Pb119894 P = (A119879119894 A119894 + 120582I)minus1 A119879119894 119894 = 1 2 119904 (13)

where I is unit matrix

Step 5. The recognition result for patch $\mathbf{y}_i$ is

$$r_{ij}\left(\mathbf{y}_i\right) = \frac{\left\|\mathbf{b}_i - \mathbf{a}_{ij}\hat{\boldsymbol{\rho}}_i^j\right\|_2}{\left\|\hat{\boldsymbol{\rho}}_i^j\right\|_2}, \quad i = 1, 2, \ldots, s,\; j = 1, 2, \ldots, k,$$
$$\mathrm{ID}\left(r_{ij}\right) = \arg\min_j \left[r_{ij}\left(\mathbf{y}_i\right)\right], \tag{14}$$

where $\hat{\boldsymbol{\rho}}_i^j$ denotes the coefficients in $\hat{\boldsymbol{\rho}}_i$ associated with class $j$.

Step 6. Use voting over the patch-level results to obtain the final recognition result.
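Steps 4-6 together can be sketched as follows in Python/NumPy (a minimal illustration, not the authors' MATLAB code; it assumes the class-$j$ coefficients are simply the sub-vector of $\hat{\boldsymbol{\rho}}_i$ indexed by that class's dictionary columns):

```python
import numpy as np
from collections import Counter

def classify_patch(b_i, A_i, labels, lam=1e-3):
    """Steps 4-5 for one patch: collaborative coding, then class-wise residuals."""
    # Eq. (12)-(13): ridge-regularized coding of b_i over the local dictionary A_i
    n = A_i.shape[1]
    rho = np.linalg.solve(A_i.T @ A_i + lam * np.eye(n), A_i.T @ b_i)
    best_class, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Eq. (14): class residual, regularized by the class coefficient energy
        r = np.linalg.norm(b_i - A_i[:, mask] @ rho[mask]) / np.linalg.norm(rho[mask])
        if r < best_r:
            best_class, best_r = c, r
    return best_class

def classify_face(patch_measurements, patch_dicts, labels, lam=1e-3):
    """Step 6: majority vote over the s patch-level decisions."""
    votes = [classify_patch(b, A, labels, lam)
             for b, A in zip(patch_measurements, patch_dicts)]
    return Counter(votes).most_common(1)[0][0]
```

Here `patch_measurements` holds the vectors $\mathbf{b}_1, \ldots, \mathbf{b}_s$ of one test face and `patch_dicts` the corresponding dictionaries $\mathbf{A}_1, \ldots, \mathbf{A}_s$, with one label per dictionary column.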

4 Experimental Analysis

The proposed approach GPCRC and its improved methods were evaluated on three publicly available databases: Extended Yale B [40-42], CMU PIE [43, 44], and LFW [45, 46]. To show the effectiveness of GPCRC, we compared our methods with four classical methods and their improved variants, namely, SRC [13], CRC [15], SSRC [14], and PCRC [16]. We then tested GPCRC using several measurement matrices, namely, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16), to reduce the dimension of the transformed signal, and compared their performance with the conventional PCA method. In all the following experiments we ran MATLAB 2015b on a typical Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz Windows 7 x64 PC. In our implementation of Gabor filters, the parameters were set as f = √2, K_max = π/2, σ = π, μ = 0, 1, ..., 7, and ν = 0, 1, ..., 4 for all the experiments below. All the programs were run 20 times on each database, and the mean and variance of each method were reported. In order to report the best result of each method, the parameter λ used in SRC, SSRC, CRC, and PCRC was set to 0.001 [13], 0.001 [14], 0.005 [15], and 0.001 [16, 17], respectively. The patch sizes used in PCRC and in GPCRC and its improved methods were 8 × 8 [16, 17] and 16 × 16, respectively.
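The Gabor parameters above can be sketched as a filter bank. The paper does not spell out its kernel; the sketch below assumes the common Liu-Wechsler formulation $\psi_{\mu,\nu}(z) = (\|k\|^2/\sigma^2)\exp(-\|k\|^2\|z\|^2/2\sigma^2)\,[e^{ik\cdot z} - e^{-\sigma^2/2}]$ with $k = (K_{\max}/f^{\nu})e^{i\pi\mu/8}$, and all names are illustrative:

```python
import numpy as np

def gabor_kernel(mu, nu, kmax=np.pi / 2, f=np.sqrt(2), sigma=np.pi, size=31):
    """One common Gabor kernel form used in face recognition (assumed here).

    mu: orientation index (0..7), nu: scale index (0..4), matching the
    parameter settings reported in the experiments."""
    k = kmax / f**nu                      # scale-dependent frequency magnitude
    phi = np.pi * mu / 8.0                # orientation angle
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = (k**2 / sigma**2) * np.exp(-(k**2) * (x**2 + y**2) / (2 * sigma**2))
    # complex carrier minus DC-compensation term
    return envelope * (np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2))

# bank of 8 orientations x 5 scales, as in the experiments (40 kernels)
bank = [gabor_kernel(mu, nu) for mu in range(8) for nu in range(5)]
```

Convolving a patch with all 40 kernels and stacking the magnitude responses is the usual way such a bank yields the high-dimensional Gabor feature.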

4.1. Extended Yale B. To test the robustness of the proposed method to illumination, we used the classic Extended Yale B database [40-42], because faces in the Extended Yale B database were acquired under different illumination conditions. The Extended Yale B database contains 38 human subjects under 9 poses and 64 illumination conditions. For a clear comparison, all frontal-face images (marked with P00) were used in our experiment, and the face images were downsampled to 32 × 32. 10 or 20 faces from each individual were selected randomly as training samples; 30 others from each individual were selected as test samples. Figure 4 shows some P00-marked samples from the Extended Yale B database. The experimental results of each method are shown in Table 1. In Figure 5, we compared the recognition rates of

Mathematical Problems in Engineering 7

Table 1: Recognition rate (%) on Extended Yale B database.

| Features | Method | 10 training samples | 20 training samples |
| Intensity feature | SRC | 76.15 ± 9.60 | 80.50 ± 4.32 |
| | CRC | 78.60 ± 10.58 | 83.00 ± 6.64 |
| | SSRC | 87.53 ± 7.91 | 90.01 ± 4.38 |
| | PCRC | 90.44 ± 1.09 | 94.25 ± 1.01 |
| LBP feature | SRC | 83.56 ± 11.32 | 89.74 ± 9.52 |
| | CRC | 83.02 ± 10.19 | 90.30 ± 3.27 |
| | SSRC | 90.58 ± 6.39 | 95.89 ± 3.01 |
| | PCRC | 79.67 ± 12.57 | 87.25 ± 5.08 |
| Gabor feature | SRC | 85.25 ± 5.36 | 91.50 ± 3.50 |
| | CRC | 86.18 ± 4.43 | 90.00 ± 2.80 |
| | SSRC | 87.06 ± 9.49 | 92.24 ± 5.06 |
| | PCRC | 94.34 ± 2.06 | 98.83 ± 0.94 |

Figure 4 Some marked samples of the Extended Yale B database

Figure 5: The performance of each dimension reduction method for GPCRC on the Extended Yale B database. (a) 10 training samples; (b) 20 training samples. [Curves: random Gaussian matrices; Toeplitz and cyclic matrices; deterministic sparse Toeplitz matrices (Δ = 4, 8, 16); PCA. Axes: recognition rate (%) versus feature dimension.]


Table 2: The speed of each dimension reduction method.

| Method | Random Gaussian | Toeplitz and cyclic | PCA |
| Time (s) | 1.1131 | 1.0836 | 41.7295 |

| Method | Deterministic sparse Toeplitz (Δ = 4) | (Δ = 8) | (Δ = 16) |
| Time (s) | 1.2643 | 1.2226 | 1.2678 |

Figure 6 Some marked samples of the CMU PIE database

various methods with the feature dimensions 32, 64, 128, 256, 512, and 1024 for each patch in GPCRC; those numbers correspond to downsampling ratios of 1/320, 1/160, 1/80, 1/40, 1/20, and 1/10, respectively. In Table 1, it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments. In Figure 5, we can see the performance of the various dimensionality reduction methods for GPCRC. In Figure 5(a), 10 samples of each individual were used as training samples. When the feature dimension is low (≤256), the best performance of PCA is 92.80% (at dimension 256), which is significantly higher than that of the measurement matrices at dimension 256: random Gaussian matrices reach 88.77%, Toeplitz and cyclic matrices 85.17%, and deterministic sparse Toeplitz matrices 86.67% (Δ = 4), 85.69% (Δ = 8), and 86.05% (Δ = 16); but none of these matches the performance at the original dimension (10240). The reason why PCA performs significantly better than the measurement matrices at low dimension can be analyzed through their operating mechanisms. PCA linearly transforms the original data into a set of linearly independent components, which extracts the principal components of the data; these principal components can represent image signals more accurately [31-33]. The theory of compressed sensing [34-36] states that when a signal is sparse or compressible in some transform domain, a measurement matrix that is incoherent with the transform matrix can be used to project the transform coefficients onto a low-dimensional vector, and this projection preserves the information needed for signal reconstruction. Compressed sensing can achieve reconstruction with high accuracy or high probability from a small number of projections. At extremely low dimension, the reconstruction information is insufficient and the original signal cannot be reconstructed accurately, so the recognition rate is low; as the dimension increases and the measurements reconstruct the original signal more accurately, the recognition rate improves. When the feature dimension is higher (≥512), PCA cannot work because of the sample size. At dimension 512, the measurement matrices (deterministic sparse Toeplitz matrices at 95.21% for Δ = 8 and 94.91% for Δ = 16) already reach the performance of the original dimension (10240), at only 1/20 of the original dimension. At 1/10 of the original dimension, the performance of the measurement matrices has essentially reached that of the original dimension: random Gaussian matrices 95.60%, and deterministic sparse Toeplitz matrices 94.64% (Δ = 4), 94.56% (Δ = 8), and 94.98% (Δ = 16). In Figure 5(b), 20 samples of each individual were used as training samples. When the feature dimension is ≤512, the best performer is PCA at 97.50% (dimension 512). At dimension 1024, PCA cannot work, while the performance of the measurement matrices has essentially reached that of the original dimension: random Gaussian matrices 98.80%, and deterministic sparse Toeplitz matrices 98.68% (Δ = 4), 99.01% (Δ = 8), and 98.77% (Δ = 16). In Table 2, we fixed the dimension of each reduction method at 512 in the case of 20 training samples. In general, the complexity of PCA is O(n³) [47], where n is the number of rows of the covariance matrix. The example we provided consists of 38 individuals, each with 20 samples. Since the number of Gabor features per patch (10240) far exceeds the total number of training samples (760), the number of rows of the covariance matrix is determined by the total number of training samples, and the complexity of PCA is approximately O(760³). In the proposed measurement matrix algorithm, however, the measurement matrix is generated from a deterministic distribution (e.g., a Gaussian distribution) and structure according to the required dimension and length of the signal, so the measurement matrix has low complexity [36, 48, 49]. Table 2 lists the actual speed of each dimension reduction method for one patch over all samples (all training and all test samples). Based on the above analysis, the PCA method has the most outstanding performance, but it is limited by the sample size and is time-consuming. The measurement matrices do not perform well at low dimension but are not limited by the sample size; once the dimension reaches a certain value, their performance matches that of the original dimension. Thus, the dimension of the data can be reduced without loss of recognition rate.
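The low cost of forming such matrices can be sketched as follows. This is a hedged illustration of the two simplest recipes (i.i.d. Gaussian entries, and a circulant matrix built from one random ±1 sequence); the pitch-Δ deterministic sparse Toeplitz construction of [36] is not reproduced here, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10240, 512            # Gabor feature length per patch, measured length

# Random Gaussian measurement matrix: i.i.d. N(0, 1/m) entries
phi_gauss = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

# Circulant (Toeplitz-structured) measurement matrix: one random +/-1
# generating sequence, cyclically shifted; only the first m rows are kept
c = rng.choice([-1.0, 1.0], size=n)
phi_toeplitz = np.empty((m, n))
for i in range(m):
    phi_toeplitz[i] = np.roll(c, i)

g = rng.normal(size=n)       # stand-in for one 10240-dim Gabor feature vector
a = phi_gauss @ g            # 512-dim measured signal, as in Eq. (11)
```

Note that neither construction looks at the data at all, which is why generation is essentially free compared with computing a 760 × 760 covariance eigendecomposition for PCA.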


Figure 7: The performance of each dimension reduction method for GPCRC on the CMU PIE database. (a) 2 training samples; (b) 3 training samples; (c) 4 training samples; (d) 5 training samples. [Curves: random Gaussian matrices; Toeplitz and cyclic matrices; deterministic sparse Toeplitz matrices (Δ = 4, 8, 16); PCA. Axes: recognition rate (%) versus feature dimension.]

4.2. CMU PIE. To further test robustness to illumination, pose, and expression, we used the currently popular CMU PIE database [43, 44], which consists of 41,368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To verify the advantage of our method under small sample size conditions, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images were downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC has the best results. In Figure 7, we compared the recognition rates of the various dimension reduction methods


Table 3: Recognition rate (%) on CMU PIE database.

| Features | Method | 2 training samples | 3 training samples | 4 training samples | 5 training samples |
| Intensity feature | SRC | 28.21 ± 5.43 | 39.36 ± 9.82 | 44.73 ± 15.29 | 60.54 ± 6.45 |
| | CRC | 30.34 ± 6.15 | 40.62 ± 9.20 | 44.35 ± 7.42 | 61.03 ± 7.82 |
| | SSRC | 37.52 ± 8.34 | 54.74 ± 8.50 | 57.32 ± 9.82 | 66.73 ± 8.85 |
| | PCRC | 42.32 ± 1.17 | 56.46 ± 1.88 | 61.16 ± 2.91 | 70.47 ± 3.87 |
| LBP feature | SRC | 38.42 ± 6.46 | 53.37 ± 7.10 | 53.67 ± 5.21 | 63.31 ± 7.24 |
| | CRC | 37.32 ± 7.75 | 54.83 ± 6.15 | 54.27 ± 8.31 | 64.25 ± 6.90 |
| | SSRC | 43.17 ± 5.71 | 56.84 ± 9.21 | 64.53 ± 4.76 | 68.25 ± 4.35 |
| | PCRC | 41.21 ± 7.10 | 52.21 ± 11.76 | 62.54 ± 6.21 | 70.22 ± 3.07 |
| Gabor feature | SRC | 41.43 ± 4.20 | 53.36 ± 5.40 | 53.58 ± 7.32 | 66.79 ± 5.45 |
| | CRC | 40.25 ± 5.15 | 55.35 ± 3.28 | 56.68 ± 7.42 | 66.24 ± 6.18 |
| | SSRC | 44.55 ± 6.00 | 55.96 ± 6.20 | 62.50 ± 3.16 | 70.41 ± 4.79 |
| | PCRC | 48.57 ± 1.29 | 58.45 ± 1.94 | 68.82 ± 2.56 | 73.18 ± 2.44 |

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, and at dimension 1024 the performance of the measurement matrices has essentially reached that of the original dimension. The recognition rate of each measurement matrix is given below:

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrix under the premise of preserving the recognition rate.

4.3. LFW. In practice, the number of training samples is very limited, and only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a, each containing no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples from each individual for training and another 5 samples from each

Figure 8 Some marked samples of the LFW database

individual for testing. The face images were downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with training sample sizes from 1 to 2. In Figure 9, we can also see the advantage of the measurement matrices for small sample sizes. At dimension 1024, the performance of the measurement matrices with a single training sample is as follows: random Gaussian matrices 26.18%, Toeplitz and cyclic matrices 24.68%, and deterministic sparse Toeplitz matrices 24.43% (Δ = 4), 25.85% (Δ = 8), and 25.28% (Δ = 16). With two training samples, the performance is as follows: random Gaussian matrices 39.68%, Toeplitz and cyclic matrices 37.72%, and deterministic sparse Toeplitz matrices 37.56% (Δ = 4), 39.91% (Δ = 8), and 38.25% (Δ = 16).

5 Conclusion

In order to alleviate the influence of unreliable environments on small-sample-size FR, in this paper we applied an improved method combining Gabor local features with PCRC, and we proposed to use measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small-sample FR. (2) The measurement


Table 4: Recognition rate (%) on LFW database.

| Features | Method | 1 training sample | 2 training samples |
| Intensity feature | SRC | 10.58 ± 7.20 | 19.02 ± 8.29 |
| | CRC | 9.08 ± 8.15 | 20.39 ± 7.20 |
| | SSRC | 17.03 ± 5.87 | 28.52 ± 7.50 |
| | PCRC | 22.29 ± 3.96 | 33.29 ± 5.28 |
| LBP feature | SRC | 15.52 ± 5.54 | 25.31 ± 7.42 |
| | CRC | 16.23 ± 6.21 | 26.22 ± 6.57 |
| | SSRC | 19.16 ± 3.41 | 32.10 ± 4.26 |
| | PCRC | 21.26 ± 4.73 | 31.59 ± 6.11 |
| Gabor feature | SRC | 18.68 ± 4.23 | 31.35 ± 6.40 |
| | CRC | 17.31 ± 8.15 | 30.83 ± 5.20 |
| | SSRC | 21.30 ± 7.21 | 33.67 ± 4.33 |
| | PCRC | 25.82 ± 3.42 | 39.60 ± 3.54 |

Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database. (a) Single training sample; (b) 2 training samples. [Curves: random Gaussian matrices; Toeplitz and cyclic matrices; deterministic sparse Toeplitz matrices (Δ = 4, 8, 16); PCA. Axes: recognition rate (%) versus feature dimension.]

matrices proposed to handle the high dimensionality in the GPCRC method can effectively improve computational efficiency and speed, and they overcome the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), University Natural Science Research Project of Anhui Province (no. KJ2017A259), and Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530-1552, 2014.

[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010-3018, 2015.


[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1-37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20-27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1-8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. KSwathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40-42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477-491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," IEEE Computer Society Press, Los Alamitos, vol. 13, no. 100, pp. 21-27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256-1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376-383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1-8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644-1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1-2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399-406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471-478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822-835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562-570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261-3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448-461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86-92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465-1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58-65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76-80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135-139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1-6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1-10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067-2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977-1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286-300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111-117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1-6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.


[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294-298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362-1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130-134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796-808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850-863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412-1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673-2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88-97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041-2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of the Visual Communications and Image Processing 2010, vol. 7744, July 2010.



recognition rate will improve When the feature dimension ishigher (ge512) PCA cannot work because of the sample sizeAt 512 dimension the measurement matrix (Deterministicsparse Toeplitz matrices (Δ = 8) are 9521 Deterministicsparse Toeplitz matrices (Δ = 16) are 9491) has reached theperformance of the original dimension (10240 dimension)and the dimension is only 120 of the original dimension Atthe 110 of the original dimension the performance of themeasurement matrix has basically achieved the performanceof the original dimension Random Gaussian matrices are9560 Deterministic sparse Toeplitz matrices (Δ = 4) are9464 Deterministic sparse Toeplitz matrices (Δ = 8) are9456 and Deterministic sparse Toeplitz matrices (Δ = 16)are 9498 In Figure 6(b) 20 samples of each individualwere used as training samples When the feature dimensionle 512 the best performance is PCA which is 9750 (512dimension) At 1024 dimension PCA cannot work andthe performance of the measurement matrix has basicallyachieved the performance of the original dimension RandomGaussian matrices are 9880 Deterministic sparse Toeplitzmatrices (Δ = 4) are 9868 Deterministic sparse Toeplitzmatrices (Δ = 8) are 9901 and Deterministic sparseToeplitzmatrices (Δ = 16) are 9877 In Table 2 we fixed thedimension of each reductionmethods at 512 dimension in thecase of 20 training samples In general the complexity of PCAis119874(1198993) [47] where 119899 is the number of rows of the covariancematrix The example we provided consists of 38 individualsand each individual contains 20 samples Since the numberof Gabor features (10240) for each patch far outweighs thetotal number of training samples (760) the number of rowsof the covariancematrix is determined by the total number oftraining samples and the complexity of PCA is approximately119874(7603) However in the proposed measurement matrixalgorithm the measurement matrix is formed by a certaindeterministic distribution (eg Gaussian distribution) andstructure according 
to the required dimension and length ofthe signal so that the measurement matrix demonstrates lowcomplexity [36 48 49] The actual speed of each dimensionreduction methods for one patch of all samples (including alltraining samples and all testing samples) is listed Based onthe above analysis we can see that the PCA method has themost outstanding performance but it is limited by the samplesize and it is time-consumingThemeasurement matrix doesnot perform well in low dimension but is not limited by thesample size When the dimension reaches a certain value itsperformance was the same as that of the original dimensionThus the dimension of the data can be reduced without lossof recognition rate
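To make the contrast concrete, the projection step can be sketched as follows. This is an illustrative sketch of the random Gaussian case only, with sizes taken from the text (10,240 Gabor features per patch, 38 subjects with 10 training samples each); it is not the authors' code, and the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 10240, 512      # Gabor feature length per patch; target dimension
n_train = 380          # e.g., 38 subjects x 10 training samples each

X = rng.standard_normal((n_train, d))   # stand-in for Gabor patch features

# A random Gaussian measurement matrix depends only on the target shape,
# not on the data, so there is no covariance matrix to eigendecompose.
Phi = rng.standard_normal((m, d)) / np.sqrt(m)
X_low = X @ Phi.T                       # project every sample to m dimensions

# PCA, in contrast, can produce at most n_train - 1 informative components,
# which is why it "cannot work" once the target dimension exceeds that.
max_pca_dim = n_train - 1

print(X_low.shape, max_pca_dim)
```

This also shows why the data-independent projection sidesteps PCA's sample-size ceiling: the target dimension m can exceed the number of training samples without any change to the construction.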

Mathematical Problems in Engineering 9

[Figure 7 here: four panels plotting recognition rate (%) against feature dimension (0–1100) for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16): (a) 2 training samples; (b) 3 training samples; (c) 4 training samples; (d) 5 training samples.]

Figure 7: The performance of each dimension reduction method for GPCRC on the CMU PIE database.

4.2. CMU PIE. In order to further test robustness to illumination, pose, and expression, we utilized a currently popular database, CMU PIE [43, 44], which consists of 41,368 images of 68 people; for each person there are 13 different poses, 43 different illumination conditions, and 4 different expressions. To verify the advantage of our method under small sample size conditions, we randomly selected 2, 3, 4, and 5 face samples from each individual as training samples and another 40 face samples from each individual as test samples. The face images are downsampled to 32 × 32. Figure 6 shows some marked samples of the CMU PIE database. The experimental results of each method are shown in Table 3; GPCRC has the best results. In Figure 7, we compare the recognition rates of the various dimension reduction methods


Table 3: Recognition rate (%) on the CMU PIE database.

Features           Method   2 samples        3 samples        4 samples        5 samples
Intensity feature  SRC      28.21 ± 5.43     39.36 ± 9.82     44.73 ± 15.29    60.54 ± 6.45
                   CRC      30.34 ± 6.15     40.62 ± 9.20     44.35 ± 7.42     61.03 ± 7.82
                   SSRC     37.52 ± 8.34     54.74 ± 8.50     57.32 ± 9.82     66.73 ± 8.85
                   PCRC     42.32 ± 1.17     56.46 ± 1.88     61.16 ± 2.91     70.47 ± 3.87
LBP feature        SRC      38.42 ± 6.46     53.37 ± 7.10     53.67 ± 5.21     63.31 ± 7.24
                   CRC      37.32 ± 7.75     54.83 ± 6.15     54.27 ± 8.31     64.25 ± 6.90
                   SSRC     43.17 ± 5.71     56.84 ± 9.21     64.53 ± 4.76     68.25 ± 4.35
                   PCRC     41.21 ± 7.10     52.21 ± 11.76    62.54 ± 6.21     70.22 ± 3.07
Gabor feature      SRC      41.43 ± 4.20     53.36 ± 5.40     53.58 ± 7.32     66.79 ± 5.45
                   CRC      40.25 ± 5.15     55.35 ± 3.28     56.68 ± 7.42     66.24 ± 6.18
                   SSRC     44.55 ± 6.00     55.96 ± 6.20     62.50 ± 3.16     70.41 ± 4.79
                   PCRC     48.57 ± 1.29     58.45 ± 1.94     68.82 ± 2.56     73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, and at 1024 dimensions the performance of the measurement matrices has essentially reached that of the original dimension. The recognition rate of each measurement matrix is as follows:

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrix under the premise of preserving the recognition rate.
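As an illustration of how such a matrix can be generated, the sketch below builds a sparse Toeplitz-structured measurement matrix whose generating sequence has ±1 entries every Δ positions. The function name and normalization are our own illustrative choices; the exact random-pitch construction of [36] differs in detail.

```python
import numpy as np

def sparse_toeplitz(m, d, delta, rng):
    """Illustrative m x d sparse Toeplitz measurement matrix: every row is a
    shift of one generating sequence whose nonzero (+/-1) entries occur every
    `delta` positions, so each row has about d/delta nonzeros."""
    c = np.zeros(d + m - 1)
    c[::delta] = rng.choice([-1.0, 1.0], size=c[::delta].shape)
    Phi = np.empty((m, d))
    for i in range(m):
        Phi[i] = c[i:i + d][::-1]       # constant-diagonal (Toeplitz) structure
    return Phi / np.sqrt(m / delta)     # rough column-energy normalization

rng = np.random.default_rng(1)
Phi = sparse_toeplitz(64, 256, delta=4, rng=rng)
print(Phi.shape)
```

Larger Δ makes the matrix sparser, which reduces both storage and the cost of the matrix-vector product, at some cost in how well the projection preserves distances.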

4.3. LFW. In practice, the number of training samples is very limited, and often only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5,749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a; each subject contains no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples for training and another 5 samples from each

Figure 8: Some marked samples of the LFW database.

individual for testing. The face images are downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with training sample sizes of 1 and 2. In Figure 9, we can also see the advantage of the measurement matrices under small sample sizes. At 1024 dimensions, the performance of the measurement matrices with a single training sample is as follows: random Gaussian matrices achieve 26.18%, Toeplitz and cyclic matrices 24.68%, deterministic sparse Toeplitz matrices (Δ = 4) 24.43%, deterministic sparse Toeplitz matrices (Δ = 8) 25.85%, and deterministic sparse Toeplitz matrices (Δ = 16) 25.28%. With two training samples, random Gaussian matrices achieve 39.68%, Toeplitz and cyclic matrices 37.72%, deterministic sparse Toeplitz matrices (Δ = 4) 37.56%, deterministic sparse Toeplitz matrices (Δ = 8) 39.91%, and deterministic sparse Toeplitz matrices (Δ = 16) 38.25%.

5. Conclusion

In order to alleviate the influence of unreliable environments on small-sample-size face recognition (FR), in this paper we applied an improved Gabor local feature method to PCRC, and we proposed using measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small sample FR. (2) The measurement


Table 4: Recognition rate (%) on the LFW database.

Features           Method   1 training sample   2 training samples
Intensity feature  SRC      10.58 ± 7.20        19.02 ± 8.29
                   CRC       9.08 ± 8.15        20.39 ± 7.20
                   SSRC     17.03 ± 5.87        28.52 ± 7.50
                   PCRC     22.29 ± 3.96        33.29 ± 5.28
LBP feature        SRC      15.52 ± 5.54        25.31 ± 7.42
                   CRC      16.23 ± 6.21        26.22 ± 6.57
                   SSRC     19.16 ± 3.41        32.10 ± 4.26
                   PCRC     21.26 ± 4.73        31.59 ± 6.11
Gabor feature      SRC      18.68 ± 4.23        31.35 ± 6.40
                   CRC      17.31 ± 8.15        30.83 ± 5.20
                   SSRC     21.30 ± 7.21        33.67 ± 4.33
                   PCRC     25.82 ± 3.42        39.60 ± 3.54

[Figure 9 here: two panels plotting recognition rate (%) against feature dimension (0–1100) for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16): (a) single training sample; (b) 2 training samples.]

Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database.

matrices proposed to deal with the high dimensionality in the GPCRC method can effectively improve computational efficiency and speed, and they overcome the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), the Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), the Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), the University Natural Science Research Project of Anhui Province (no. KJ2017A259), and the Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.
[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.
[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.
[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.
[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.
[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.
[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," Los Alamitos: IEEE Computer Society Press, vol. 13, no. 100, pp. 21–27, 1991.
[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.
[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.
[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.
[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.
[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1–2, IEEE, Amsterdam, The Netherlands, September 2008.
[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.
[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.
[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.
[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," in International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.
[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.
[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.
[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.
[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.
[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.
[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.
[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.
[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.
[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.
[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.
[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.
[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.
[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.
[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.
[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies," IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.
[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.
[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.
[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.
[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.
[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.
[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.
[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.
[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.
[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.
[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.
[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.
[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.
[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.



Table 2: The speed of each dimension reduction method (one patch, 512 dimensions, 20 training samples).

Method                                    Time (s)
Random Gaussian                           11.131
Toeplitz and cyclic                       10.836
PCA                                       417.295
Deterministic sparse Toeplitz (Δ = 4)     12.643
Deterministic sparse Toeplitz (Δ = 8)     12.226
Deterministic sparse Toeplitz (Δ = 16)    12.678
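One reason Toeplitz and cyclic (circulant) matrices are attractive for speed is that their matrix-vector products can be computed with FFTs in O(d log d) time rather than with an explicit m × d multiplication. The sketch below illustrates a partial-circulant projection in the spirit of [49]; the function name and sizes are our own illustrative choices.

```python
import numpy as np

def circulant_project(x, c, m):
    """Project x with the m x d partial circulant matrix generated by c:
    circular convolution via FFT, then keep the first m outputs."""
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
    return y[:m]

rng = np.random.default_rng(2)
d, m = 10240, 512
x = rng.standard_normal(d)            # stand-in for one Gabor feature vector
c = rng.choice([-1.0, 1.0], size=d)   # generating sequence of the circulant
y = circulant_project(x, c, m)
print(y.shape)
```

The structure also means only the length-d generating sequence needs to be stored, rather than the full m × d matrix.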

Figure 6 Some marked samples of the CMU PIE database

various methods with the space dimensions 32 64 128256 512 and 1024 for each patch in GPCRC and thosenumbers correspond to downsampling ratios of 1320 1160180 140 120 and 110 respectively In Table 1 it can beclearly seen that GPCRC achieves the highest recognitionrate performance in all experiments In Figure 6 we cansee the performance of various dimensionality reductionmethods for GPCRC 10 samples of each individual inFigure 5(a) were used as training samples When the featuredimension is low (le256) the best performance of PCA is9280 (256 dimension) which is significantly higher thanthat of the measurement matrix at 256 dimension RandomGaussian matrices are 8877 Toeplitz and cyclic matricesare 8517 Deterministic sparse Toeplitz matrices (Δ = 4)are 8667 Deterministic sparse Toeplitz matrices (Δ = 8)are 8569 and Deterministic sparse Toeplitz matrices (Δ =16) are 8605 but none of these has been as good as theperformance of the original dimension (10240 dimension)So the reason why the performance of PCA is significantlybetter than the measurement matrix at low dimension canbe analyzed through their operation mechanism the PCAtransforms the original data into a set of linearly independentrepresentations of each dimension in a linear transformationwhich can extract the principal component of the dataThese principal components can represent image signalsmore accurately [31ndash33] The theory of compressed sensing[34ndash36] states that when the signal is sparse or compressiblein a transform domain the measurement matrix whichis noncoherent with the transform matrix can be used toproject transform coefficients to the low-dimensional vec-tor and this projection can maintain the information forsignal reconstruction The compressed sensing technologycan achieve the reconstruction with high accuracy or highprobability using small number of projection data In the caseof extremely low dimension the reconstruction informationis insufficient and the original signal cannot be 
reconstructedaccurately So the recognition rate is not high When thedimension is increased and the reconstructed informationcan more accurately reconstruct the original signal the

recognition rate will improve When the feature dimension ishigher (ge512) PCA cannot work because of the sample sizeAt 512 dimension the measurement matrix (Deterministicsparse Toeplitz matrices (Δ = 8) are 9521 Deterministicsparse Toeplitz matrices (Δ = 16) are 9491) has reached theperformance of the original dimension (10240 dimension)and the dimension is only 120 of the original dimension Atthe 110 of the original dimension the performance of themeasurement matrix has basically achieved the performanceof the original dimension Random Gaussian matrices are9560 Deterministic sparse Toeplitz matrices (Δ = 4) are9464 Deterministic sparse Toeplitz matrices (Δ = 8) are9456 and Deterministic sparse Toeplitz matrices (Δ = 16)are 9498 In Figure 6(b) 20 samples of each individualwere used as training samples When the feature dimensionle 512 the best performance is PCA which is 9750 (512dimension) At 1024 dimension PCA cannot work andthe performance of the measurement matrix has basicallyachieved the performance of the original dimension RandomGaussian matrices are 9880 Deterministic sparse Toeplitzmatrices (Δ = 4) are 9868 Deterministic sparse Toeplitzmatrices (Δ = 8) are 9901 and Deterministic sparseToeplitzmatrices (Δ = 16) are 9877 In Table 2 we fixed thedimension of each reductionmethods at 512 dimension in thecase of 20 training samples In general the complexity of PCAis119874(1198993) [47] where 119899 is the number of rows of the covariancematrix The example we provided consists of 38 individualsand each individual contains 20 samples Since the numberof Gabor features (10240) for each patch far outweighs thetotal number of training samples (760) the number of rowsof the covariancematrix is determined by the total number oftraining samples and the complexity of PCA is approximately119874(7603) However in the proposed measurement matrixalgorithm the measurement matrix is formed by a certaindeterministic distribution (eg Gaussian distribution) andstructure according 
to the required dimension and length ofthe signal so that the measurement matrix demonstrates lowcomplexity [36 48 49] The actual speed of each dimensionreduction methods for one patch of all samples (including alltraining samples and all testing samples) is listed Based onthe above analysis we can see that the PCA method has themost outstanding performance but it is limited by the samplesize and it is time-consumingThemeasurement matrix doesnot perform well in low dimension but is not limited by thesample size When the dimension reaches a certain value itsperformance was the same as that of the original dimensionThus the dimension of the data can be reduced without lossof recognition rate

Mathematical Problems in Engineering 9

15

20

25

30

35

40

45

50

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 1000 1100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(a) 2 training samples

10

15

20

25

30

35

40

45

50

55

60

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 10001100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(b) 3 training samples

05

10152025303540455055606570

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 10001100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(c) 4 training samples

05

101520253035404550556065707580

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 1000 1100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(d) 5 training samples

Figure 7 The performance of each dimension reduction method for GPCRC on CMU PIE database

42 CMU PIE In order to further test the robustness inillumination pose and expression we utilized a currentlypopular database CMU PIE [43 44] a database consistingof 41368 images of 68 people for each person there are 13different poses 43 different illumination conditions and 4different expressions To testify the advantage of our methodin small sample size condition we randomly selected 2 3 4

and 5 face samples from each individual as training sampleand other 40 face samples from each individual as test sam-plesThe face images size is downsampled to 32times32 Figure 6shows some marked samples of the CMU PIE database Theexperimental results of each method are shown in Table 3GPCRC has the best results In Figure 7 we compared therecognition rates of various dimension reduction methods

10 Mathematical Problems in Engineering

Table 3: Recognition rate (%) on the CMU PIE database.

Features          | Method | 2 samples     | 3 samples     | 4 samples     | 5 samples
Intensity feature | SRC    | 28.21 ± 5.43  | 39.36 ± 9.82  | 44.73 ± 15.29 | 60.54 ± 6.45
Intensity feature | CRC    | 30.34 ± 6.15  | 40.62 ± 9.20  | 44.35 ± 7.42  | 61.03 ± 7.82
Intensity feature | SSRC   | 37.52 ± 8.34  | 54.74 ± 8.50  | 57.32 ± 9.82  | 66.73 ± 8.85
Intensity feature | PCRC   | 42.32 ± 1.17  | 56.46 ± 1.88  | 61.16 ± 2.91  | 70.47 ± 3.87
LBP feature       | SRC    | 38.42 ± 6.46  | 53.37 ± 7.10  | 53.67 ± 5.21  | 63.31 ± 7.24
LBP feature       | CRC    | 37.32 ± 7.75  | 54.83 ± 6.15  | 54.27 ± 8.31  | 64.25 ± 6.90
LBP feature       | SSRC   | 43.17 ± 5.71  | 56.84 ± 9.21  | 64.53 ± 4.76  | 68.25 ± 4.35
LBP feature       | PCRC   | 41.21 ± 7.10  | 52.21 ± 11.76 | 62.54 ± 6.21  | 70.22 ± 3.07
Gabor feature     | SRC    | 41.43 ± 4.20  | 53.36 ± 5.40  | 53.58 ± 7.32  | 66.79 ± 5.45
Gabor feature     | CRC    | 40.25 ± 5.15  | 55.35 ± 3.28  | 56.68 ± 7.42  | 66.24 ± 6.18
Gabor feature     | SSRC   | 44.55 ± 6.00  | 55.96 ± 6.20  | 62.50 ± 3.16  | 70.41 ± 4.79
Gabor feature     | PCRC   | 48.57 ± 1.29  | 58.45 ± 1.94  | 68.82 ± 2.56  | 73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the sample size limit, and at 1024 dimensions the performance of the measurement matrix essentially reaches that of the original dimension. The recognition rate of each measurement matrix is shown below.

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrix under the premise of ensuring the recognition rate.
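The three families of measurement matrices compared above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the ±1 generating values, the scaling, and the "pitch Δ" sparsification of the deterministic sparse Toeplitz matrix are assumptions (the exact construction follows [36]).

```python
import numpy as np

def gaussian_matrix(m, n, rng):
    # Random Gaussian measurement matrix with N(0, 1/m) entries.
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

def toeplitz_matrix(m, n, rng):
    # Toeplitz measurement matrix: every descending diagonal is constant,
    # so only m + n - 1 random (+/-1) values need to be generated.
    v = rng.choice([-1.0, 1.0], size=m + n - 1)
    return np.stack([v[i:i + n][::-1] for i in range(m)])

def sparse_toeplitz_matrix(m, n, delta, rng):
    # Hypothetical sketch of a deterministic sparse Toeplitz matrix with
    # pitch delta: only every delta-th entry of the generating vector is
    # nonzero, so the matrix is sparse and cheap to apply.
    v = np.zeros(m + n - 1)
    v[::delta] = rng.choice([-1.0, 1.0], size=v[::delta].size)
    return np.stack([v[i:i + n][::-1] for i in range(m)])

# Reduce a 1024-dimensional feature (stand-in for a Gabor feature vector
# of a 32 x 32 image) to 256 dimensions by one matrix-vector product.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
Phi = gaussian_matrix(256, 1024, rng)
y = Phi @ x
print(y.shape)  # (256,)
```

Unlike PCA, whose number of useful components is bounded by the number of training samples, these matrices are data independent, which is consistent with the observation above that they remain effective in the small sample size setting.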

4.3. LFW. In practice, the number of training samples is very limited, and often only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database, which includes frontal images of 5,749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a, each containing no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples for training and another 5 samples for testing. The face images are downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with the training sample size from 1 to 2. In Figure 9 we can also see the advantage of the measurement matrix in the small sample size setting. At 1024 dimensions, the performance with a single training sample is as follows: random Gaussian matrices, 26.18%; Toeplitz and cyclic matrices, 24.68%; and deterministic sparse Toeplitz matrices, 24.43% (Δ = 4), 25.85% (Δ = 8), and 25.28% (Δ = 16). With two training samples: random Gaussian matrices, 39.68%; Toeplitz and cyclic matrices, 37.72%; and deterministic sparse Toeplitz matrices, 37.56% (Δ = 4), 39.91% (Δ = 8), and 38.25% (Δ = 16).

Figure 8: Some marked samples of the LFW database.

Table 4: Recognition rate (%) on the LFW database.

Features          | Method | 1 sample      | 2 samples
Intensity feature | SRC    | 10.58 ± 7.20  | 19.02 ± 8.29
Intensity feature | CRC    | 9.08 ± 8.15   | 20.39 ± 7.20
Intensity feature | SSRC   | 17.03 ± 5.87  | 28.52 ± 7.50
Intensity feature | PCRC   | 22.29 ± 3.96  | 33.29 ± 5.28
LBP feature       | SRC    | 15.52 ± 5.54  | 25.31 ± 7.42
LBP feature       | CRC    | 16.23 ± 6.21  | 26.22 ± 6.57
LBP feature       | SSRC   | 19.16 ± 3.41  | 32.10 ± 4.26
LBP feature       | PCRC   | 21.26 ± 4.73  | 31.59 ± 6.11
Gabor feature     | SRC    | 18.68 ± 4.23  | 31.35 ± 6.40
Gabor feature     | CRC    | 17.31 ± 8.15  | 30.83 ± 5.20
Gabor feature     | SSRC   | 21.30 ± 7.21  | 33.67 ± 4.33
Gabor feature     | PCRC   | 25.82 ± 3.42  | 39.60 ± 3.54

Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database. Panels (a) and (b) show results for a single training sample and 2 training samples; each panel plots recognition rate (%) against feature dimension (0–1100) for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, and 16).

5. Conclusion

In order to alleviate the influence of unreliable environments and small sample sizes in face recognition, in this paper we applied an improved Gabor local feature method to PCRC and proposed using measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small-sample FR. (2) The measurement matrices proposed to handle the high dimensionality of the GPCRC method can effectively improve computational efficiency and speed, and they overcome the limitations of the PCA method.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), the Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), the Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), the University Natural Science Research Project of Anhui Province (no. KJ2017A259), and the Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.

[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.


[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM–CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," Los Alamitos: IEEE Computer Society Press, vol. 13, no. 100, pp. 21–27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1–2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: Which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.


[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.


Page 9: SUBMITTED TO MATHEMATICAL PROBLEMS IN …downloads.hindawi.com/journals/mpe/aip/3025264.pdf · SUBMITTED TO MATHEMATICAL PROBLEMS IN ENGINEERING 1 Patch based Collaborative Representation

Mathematical Problems in Engineering 9

15

20

25

30

35

40

45

50

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 1000 1100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(a) 2 training samples

10

15

20

25

30

35

40

45

50

55

60

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 10001100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(b) 3 training samples

05

10152025303540455055606570

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 10001100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(c) 4 training samples

05

101520253035404550556065707580

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 1000 1100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(d) 5 training samples

Figure 7 The performance of each dimension reduction method for GPCRC on CMU PIE database

42 CMU PIE In order to further test the robustness inillumination pose and expression we utilized a currentlypopular database CMU PIE [43 44] a database consistingof 41368 images of 68 people for each person there are 13different poses 43 different illumination conditions and 4different expressions To testify the advantage of our methodin small sample size condition we randomly selected 2 3 4

and 5 face samples from each individual as training sampleand other 40 face samples from each individual as test sam-plesThe face images size is downsampled to 32times32 Figure 6shows some marked samples of the CMU PIE database Theexperimental results of each method are shown in Table 3GPCRC has the best results In Figure 7 we compared therecognition rates of various dimension reduction methods

10 Mathematical Problems in Engineering

Table 3 Recognition rate () on CMU PIE database

Features Method Training sample size2 3 4 5

Intensity feature

SRC 2821 plusmn 543 3936 plusmn 982 4473 plusmn 1529 6054 plusmn 645CRC 3034 plusmn 615 4062 plusmn 920 4435 plusmn 742 6103 plusmn 782SSRC 3752 plusmn 834 5474 plusmn 850 5732 plusmn 982 6673 plusmn 885PCRC 4232 plusmn 117 5646 plusmn 188 6116 plusmn 291 7047 plusmn 387

LBP feature

SRC 3842 plusmn 646 5337 plusmn 710 5367 plusmn 521 6331 plusmn 724CRC 3732 plusmn 775 5483 plusmn 615 5427 plusmn 831 6425 plusmn 690SSRC 4317 plusmn 571 5684 plusmn 921 6453 plusmn 476 6825 plusmn 435PCRC 4121 plusmn 710 5221 plusmn 1176 6254 plusmn 621 7022 plusmn 307

Gabor feature

SRC 4143 plusmn 420 5336 plusmn 540 5358 plusmn 732 6679 plusmn 545CRC 4025 plusmn 515 5535 plusmn 328 5668 plusmn 742 6624 plusmn 618SSRC 4455 plusmn 600 5596 plusmn 620 6250 plusmn 316 7041 plusmn 479PCRC 4857 plusmn 129 5845 plusmn 194 6882 plusmn 256 7318 plusmn 244

for GPCRC Similarly we can see that PCA cannot achievethe best recognition rate because of the sample size limitand at 1024 dimension the performance of the measurementmatrix has basically achieved the performance of the originaldimensionThe recognition rate of eachmeasurementmatrixis shown below

(i) Random Gaussian matrices are 4873 (2 trainingsamples) 5966 (3 training samples) 6886 (4training samples) and 7360 (5 training samples)

(ii) Toeplitz and cyclic matrices are 4889 (2 trainingsamples) 5830 (3 training samples) 6544 (4training samples) and 7312 (5 training samples)

(iii) Deterministic sparse Toeplitz matrices (Δ = 4)are 4763 (2 training samples) 5868 (3 trainingsamples) 6703 (4 training samples) and 7178 (5training samples)

(iv) Deterministic sparse Toeplitz matrices (Δ = 8)are 4869 (2 training samples) 5855 (3 trainingsamples) 6891 (4 training samples) and 7427 (5training samples)

(v) Deterministic sparse Toeplitz matrices (Δ = 16)are 4694 (2 training samples) 6000 (3 trainingsamples) 6625 (4 training samples) and 7169 (5training samples)

Through the above analysis we further validated the feasibil-ity of the measurement matrix in the premise of ensuring therecognition rate

43 LFW In practice the number of training samples is verylimited and only one or two can be obtained from individualidentity documents To simulate the actual situationwe selectthe LFWdatabaseThe LFWdatabase includes frontal imagesof 5749 different subjects in unconstrained environment[45] LFW-a is a version of LFW after alignment using com-mercial face alignment software [46] We chose 158 subjectsfrom LFW-a each subject contains no less than ten samplesFor each subject we randomly choose 1 to 2 samples fromeach individual for training and another 5 samples from each

Figure 8 Some marked samples of the LFW database

individual for testing The face images size is downsampledto 32 times 32 Figure 8 shows some marked samples of the LFWdatabaseThe experimental results of eachmethod are shownin Table 4 it can be clearly seen that the GPCRC achievesthe highest recognition rate performance in all experimentswith the training sample size from 1 to 2 In Figure 9 wecan also see the advantages of the measurement matrix insmall sample size At 1024 dimension the performance ofthe measurement matrix in the single training sample isas follows Random Gaussian matrices are 2618 Toeplitzand cyclic matrices are 2468 Deterministic sparse Toeplitzmatrices (Δ = 4) are 2443 Deterministic sparse Toeplitzmatrices (Δ = 8) are 2585 and Deterministic sparseToeplitz matrices (Δ = 16) are 2528 and the performancesof the measurement matrix in the two training samples areas follows Random Gaussian matrices are 3968 Toeplitzand cyclic matrices are 3772 Deterministic sparse Toeplitzmatrices (Δ = 4) are 3756 Deterministic sparse Toeplitzmatrices (Δ = 8) are 3991 and Deterministic sparseToeplitz matrices (Δ = 16) are 3825

5 Conclusion

In order to alleviate the influence of unreliability environmenton small sample size in FR in this paper we applied improvedmethod for Gabor local features to PCRC we proposedto use the measurement matrices to reduce the dimensionof the transformed signal Several important observationscan be summarized as follows (1) The proposed GPCPCmethod can effectively deal with the influence of unreliabilityenvironment in small sample FR (2) The measurement

Mathematical Problems in Engineering 11

Table 4 Recognition rate () on LFW database

Features Method Training sample size1 2

Intensity feature

SRC 1058 plusmn 720 1902 plusmn 829CRC 908 plusmn 815 2039 plusmn 720SSRC 1703 plusmn 587 2852 plusmn 750PCRC 2229 plusmn 396 3329 plusmn 528

LBP feature

SRC 1552 plusmn 554 2531 plusmn 742CRC 1623 plusmn 621 2622 plusmn 657SSRC 1916 plusmn 341 3210 plusmn 426PCRC 2126 plusmn 473 3159 plusmn 611

Gabor feature

SRC 1868 plusmn 423 3135 plusmn 640CRC 1731 plusmn 815 3083 plusmn 520SSRC 2130 plusmn 721 3367 plusmn 433PCRC 2582 plusmn 342 3960 plusmn 354

0

5

10

15

20

25

30

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 1000 1100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(a) Single training sample

0

5

10

15

20

25

30

35

40

Feature Dimension

Reco

gniti

on ra

te (

)

Random Gaussian matricesToeplitz and cyclic matrices

PCA

0 100 200 300 400 500 600 700 800 900 10001100

Deterministic sparse Toeplitz matrices (Δ = 4)Deterministic sparse Toeplitz matrices (Δ = 8)Deterministic sparse Toeplitz matrices (Δ = 16)

(b) 2 training samples

Figure 9 The performance of each dimension reduction method for GPCRC on LFW database

matrix proposed to deal with the high dimension in GPCRCmethod can effectively improve the computational efficiencyand computational speed and they were able to overcome thelimitations of the PCA method

Conflicts of Interest

None of the authors have conflicts of interest

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (no 61672386) Humanities and Social

Sciences Planning Project of Ministry of Education (no16YJAZH071) Anhui Provincial Natural Science Foundationof China (no 1708085MF142) University Natural ScienceResearch Project of Anhui Province (no KJ2017A259) andProvincial Quality Project of Anhui Province (no 2016zy131)

References

[1] J Galbally S Marcel and J Fierrez ldquoBiometric antispoofingmethods a survey in face recognitionrdquo IEEE Access vol 2 pp1530ndash1552 2014

[2] S Nagpal M Singh R Singh and M Vatsa ldquoRegularizeddeep learning for face recognition with weight variationsrdquo IEEEAccess vol 3 pp 3010ndash3018 2015

12 Mathematical Problems in Engineering

[3] J W Tanaka and D Simonyi ldquoThe parts and wholes of facerecognition a review of the literaturerdquoThe Quarterly Journal ofExperimental Psychology no 10 pp 1ndash37 2016

[4] W Yang Z Wang and C Sun ldquoA collaborative representationbased projectionsmethod for feature extractionrdquoPattern Recog-nition vol 48 no 1 pp 20ndash27 2015

[5] O Boiman E Shechtman and M Irani ldquoIn defense of nearest-neighbor based image classificationrdquo in Proceedings of the 26thIEEE Conference on Computer Vision and Pattern Recognition(CVPR rsquo08) pp 1ndash8 June 2008

[6] K VenkataNarayana V Manoj and K KSwathi ldquoEnhancedface recognition based on PCA and SVMrdquo International Journalof Computer Applications vol 117 no 2 pp 40ndash42 2015

[7] E Owusu and Q R Mao ldquoAn svm cadaboost-based face detec-tion systemrdquo Journal of Experimental amp Theoretical ArtificialIntelligence vol 26 no 4 pp 477ndash491 2014

[8] B V Dasarathy ldquoNearest neighbor (NN) norms NN patternclassification techniquesrdquo Los Alamitos IEEE Computer SocietyPress vol 13 no 100 pp 21ndash27 1991

[9] Y Liu S S Ge C Li and Z You ldquo120581-NS a classifier by thedistance to the nearest subspacerdquo IEEE Transactions on NeuralNetworks and Learning Systems vol 22 no 8 pp 1256ndash12682011

[10] J Hamm and D D Lee ldquoGrassmann discriminant analysis aunifying view on subspace-based learningrdquo in Proceedings ofthe 25th International Conference onMachine Learning pp 376ndash383 July 2008

[11] R Wang S Shan X Chen and W Gao ldquoManifold-manifolddistance with application to face recognition based on imagesetrdquo in Proceedings of the 26th IEEE Conference on ComputerVision and Pattern Recognition (CVPR rsquo08) pp 1ndash8 June 2008

[12] J Chien and C Wu ldquoDiscriminant waveletfaces and nearestfeature classifiers for face recognitionrdquo IEEE Transactions onPattern Analysis and Machine Intelligence vol 24 no 12 pp1644ndash1649 2002

[13] J Wright A Ganesh Z Zhou A Wagner and Y Ma ldquoDemorobust face recognition via sparse representationrdquo in Proceed-ings of the 8th IEEE International Conference on Automatic Faceamp Gesture Recognition (FG rsquo08) pp 1-2 IEEE Amsterdam TheNetherlands September 2008

[14] W Deng J Hu and J Guo ldquoIn defense of sparsity based facerecognitionrdquo in Proceedings of the 26th IEEE Conference onComputer Vision and Pattern Recognition (CVPR rsquo13) pp 399ndash406 June 2013

[15] L Zhang M Yang and X Feng ldquoSparse representation orcollaborative representation Which helps face recognitionrdquo inProceedings of the IEEE International Conference on ComputerVision (ICCV rsquo11) pp 471ndash478 Barcelona Spain November2011

[16] P Zhu L Zhang Q Hu and S C K Shiu ldquoMulti-scale patchbased collaborative representation for face recognition withmargin distribution optimizationrdquo in European Conference onComputer Vision vol 7572 of Lecture Notes in Computer Sciencepp 822ndash835 2012


Mathematical Problems in Engineering



Table 3: Recognition rate (%) on the CMU PIE database.

Features            Method    Training sample size
                              2                3                4                5
Intensity feature   SRC       28.21 ± 5.43     39.36 ± 9.82     44.73 ± 15.29    60.54 ± 6.45
                    CRC       30.34 ± 6.15     40.62 ± 9.20     44.35 ± 7.42     61.03 ± 7.82
                    SSRC      37.52 ± 8.34     54.74 ± 8.50     57.32 ± 9.82     66.73 ± 8.85
                    PCRC      42.32 ± 1.17     56.46 ± 1.88     61.16 ± 2.91     70.47 ± 3.87
LBP feature         SRC       38.42 ± 6.46     53.37 ± 7.10     53.67 ± 5.21     63.31 ± 7.24
                    CRC       37.32 ± 7.75     54.83 ± 6.15     54.27 ± 8.31     64.25 ± 6.90
                    SSRC      43.17 ± 5.71     56.84 ± 9.21     64.53 ± 4.76     68.25 ± 4.35
                    PCRC      41.21 ± 7.10     52.21 ± 11.76    62.54 ± 6.21     70.22 ± 3.07
Gabor feature       SRC       41.43 ± 4.20     53.36 ± 5.40     53.58 ± 7.32     66.79 ± 5.45
                    CRC       40.25 ± 5.15     55.35 ± 3.28     56.68 ± 7.42     66.24 ± 6.18
                    SSRC      44.55 ± 6.00     55.96 ± 6.20     62.50 ± 3.16     70.41 ± 4.79
                    PCRC      48.57 ± 1.29     58.45 ± 1.94     68.82 ± 2.56     73.18 ± 2.44

for GPCRC. Similarly, we can see that PCA cannot achieve the best recognition rate because of the limited sample size, and at 1,024 dimensions the performance of the measurement matrices essentially matches that of the original dimension. The recognition rate of each measurement matrix is shown below.

(i) Random Gaussian matrices: 48.73% (2 training samples), 59.66% (3 training samples), 68.86% (4 training samples), and 73.60% (5 training samples).

(ii) Toeplitz and cyclic matrices: 48.89% (2 training samples), 58.30% (3 training samples), 65.44% (4 training samples), and 73.12% (5 training samples).

(iii) Deterministic sparse Toeplitz matrices (Δ = 4): 47.63% (2 training samples), 58.68% (3 training samples), 67.03% (4 training samples), and 71.78% (5 training samples).

(iv) Deterministic sparse Toeplitz matrices (Δ = 8): 48.69% (2 training samples), 58.55% (3 training samples), 68.91% (4 training samples), and 74.27% (5 training samples).

(v) Deterministic sparse Toeplitz matrices (Δ = 16): 46.94% (2 training samples), 60.00% (3 training samples), 66.25% (4 training samples), and 71.69% (5 training samples).

Through the above analysis, we further validated the feasibility of the measurement matrices while preserving the recognition rate.
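As a concrete illustration, the three families of measurement matrices compared above can be generated and applied as in the minimal NumPy sketch below. The sampling distributions, the normalizations, and the pitch-Δ placement rule are our assumptions following the cited constructions [34–36], not code from this paper; the feature dimensions are placeholders.

```python
import numpy as np

def gaussian_matrix(m, n, rng):
    # Random Gaussian measurement matrix: i.i.d. N(0, 1/m) entries.
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

def toeplitz_cyclic_matrix(m, n, rng):
    # Toeplitz/cyclic measurement matrix: one random +/-1 generating row;
    # each further row is a cyclic shift of the previous one.
    row = rng.choice([-1.0, 1.0], size=n)
    return np.stack([np.roll(row, k) for k in range(m)]) / np.sqrt(m)

def sparse_toeplitz_matrix(m, n, delta, rng):
    # Deterministic sparse Toeplitz matrix with pitch delta: only every
    # delta-th entry of the generating row is nonzero (assumed reading of [36]).
    row = np.zeros(n)
    row[::delta] = rng.choice([-1.0, 1.0], size=row[::delta].size)
    return np.stack([np.roll(row, k) for k in range(m)]) / np.sqrt(m / delta)

# Project a high-dimensional (e.g. Gabor) feature vector down to 1,024 dims.
rng = np.random.default_rng(0)
x = rng.normal(size=10240)      # placeholder feature vector
Phi = gaussian_matrix(1024, x.size, rng)
y = Phi @ x
print(y.shape)                  # (1024,)
```

Unlike PCA, none of these projections need to be learned from the training set, which is what makes them usable when samples per subject are scarce.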

4.3. LFW. In practice, the number of training samples is very limited, and often only one or two can be obtained from individual identity documents. To simulate this situation, we selected the LFW database. The LFW database includes frontal images of 5,749 different subjects in unconstrained environments [45]. LFW-a is a version of LFW aligned using commercial face alignment software [46]. We chose 158 subjects from LFW-a; each subject contains no fewer than ten samples. For each subject, we randomly chose 1 to 2 samples for training and another 5 samples from each

Figure 8: Some marked samples of the LFW database.

individual for testing. The face images are downsampled to 32 × 32. Figure 8 shows some marked samples of the LFW database. The experimental results of each method are shown in Table 4; it can be clearly seen that GPCRC achieves the highest recognition rate in all experiments with training sample sizes from 1 to 2. In Figure 9 we can also see the advantage of the measurement matrices at small sample sizes. At 1,024 dimensions, the performances of the measurement matrices with a single training sample are as follows: random Gaussian matrices, 26.18%; Toeplitz and cyclic matrices, 24.68%; deterministic sparse Toeplitz matrices (Δ = 4), 24.43%; deterministic sparse Toeplitz matrices (Δ = 8), 25.85%; and deterministic sparse Toeplitz matrices (Δ = 16), 25.28%. With two training samples, the performances are as follows: random Gaussian matrices, 39.68%; Toeplitz and cyclic matrices, 37.72%; deterministic sparse Toeplitz matrices (Δ = 4), 37.56%; deterministic sparse Toeplitz matrices (Δ = 8), 39.91%; and deterministic sparse Toeplitz matrices (Δ = 16), 38.25%.
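The collaborative representation step that underlies PCRC (and, per patch, GPCRC) can be sketched as follows. This is a generic CRC implementation under common choices: the l2 regularizer value and the regularized-residual decision rule follow the standard CRC formulation [15], while the patch partition and the voting across patches used by PCRC are omitted for brevity.

```python
import numpy as np

def crc_classify(D, y, labels, lam=1e-3):
    # Collaborative representation classification (CRC): code the query y
    # over ALL training samples (columns of D) with an l2-regularized
    # least-squares solution, then pick the class whose own samples best
    # reconstruct y (residual regularized by the class coefficient energy).
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best_label, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        score = (np.linalg.norm(y - D[:, idx] @ alpha[idx])
                 / max(np.linalg.norm(alpha[idx]), 1e-12))
        if score < best_score:
            best_label, best_score = c, score
    return best_label

# Toy usage: two 20-dimensional classes with well-separated means.
rng = np.random.default_rng(1)
D = np.hstack([rng.normal(0.0, 1.0, (20, 3)), rng.normal(4.0, 1.0, (20, 3))])
labels = np.array([0, 0, 0, 1, 1, 1])
query = rng.normal(4.0, 1.0, 20)
print(crc_classify(D, query, labels))
```

Because the representation uses a closed-form ridge solution rather than an iterative l1 solver, the per-patch classification remains cheap even when it is repeated over many patches.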

5. Conclusion

In order to alleviate the influence of unreliable environments and small sample sizes in FR, in this paper we applied an improved Gabor local feature method to PCRC, and we proposed using measurement matrices to reduce the dimension of the transformed signal. Several important observations can be summarized as follows. (1) The proposed GPCRC method can effectively deal with the influence of unreliable environments in small-sample FR. (2) The measurement


Table 4: Recognition rate (%) on the LFW database.

Features            Method    Training sample size
                              1                2
Intensity feature   SRC       10.58 ± 7.20     19.02 ± 8.29
                    CRC        9.08 ± 8.15     20.39 ± 7.20
                    SSRC      17.03 ± 5.87     28.52 ± 7.50
                    PCRC      22.29 ± 3.96     33.29 ± 5.28
LBP feature         SRC       15.52 ± 5.54     25.31 ± 7.42
                    CRC       16.23 ± 6.21     26.22 ± 6.57
                    SSRC      19.16 ± 3.41     32.10 ± 4.26
                    PCRC      21.26 ± 4.73     31.59 ± 6.11
Gabor feature       SRC       18.68 ± 4.23     31.35 ± 6.40
                    CRC       17.31 ± 8.15     30.83 ± 5.20
                    SSRC      21.30 ± 7.21     33.67 ± 4.33
                    PCRC      25.82 ± 3.42     39.60 ± 3.54

[Figure omitted: recognition rate (%) versus feature dimension (0–1100) for PCA, random Gaussian matrices, Toeplitz and cyclic matrices, and deterministic sparse Toeplitz matrices (Δ = 4, 8, 16).]

Figure 9: The performance of each dimension reduction method for GPCRC on the LFW database: (a) single training sample; (b) 2 training samples.

matrix proposed to deal with the high dimensionality in the GPCRC method can effectively improve the computational efficiency and speed, and it overcomes the limitations of the PCA method.

Conflicts of Interest

None of the authors have conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61672386), the Humanities and Social Sciences Planning Project of the Ministry of Education (no. 16YJAZH071), the Anhui Provincial Natural Science Foundation of China (no. 1708085MF142), the University Natural Science Research Project of Anhui Province (no. KJ2017A259), and the Provincial Quality Project of Anhui Province (no. 2016zy131).

References

[1] J. Galbally, S. Marcel, and J. Fierrez, "Biometric antispoofing methods: a survey in face recognition," IEEE Access, vol. 2, pp. 1530–1552, 2014.

[2] S. Nagpal, M. Singh, R. Singh, and M. Vatsa, "Regularized deep learning for face recognition with weight variations," IEEE Access, vol. 3, pp. 3010–3018, 2015.

[3] J. W. Tanaka and D. Simonyi, "The parts and wholes of face recognition: a review of the literature," The Quarterly Journal of Experimental Psychology, no. 10, pp. 1–37, 2016.

[4] W. Yang, Z. Wang, and C. Sun, "A collaborative representation based projections method for feature extraction," Pattern Recognition, vol. 48, no. 1, pp. 20–27, 2015.

[5] O. Boiman, E. Shechtman, and M. Irani, "In defense of nearest-neighbor based image classification," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[6] K. VenkataNarayana, V. Manoj, and K. K. Swathi, "Enhanced face recognition based on PCA and SVM," International Journal of Computer Applications, vol. 117, no. 2, pp. 40–42, 2015.

[7] E. Owusu and Q. R. Mao, "An SVM-CAdaBoost-based face detection system," Journal of Experimental & Theoretical Artificial Intelligence, vol. 26, no. 4, pp. 477–491, 2014.

[8] B. V. Dasarathy, "Nearest neighbor (NN) norms: NN pattern classification techniques," IEEE Computer Society Press, Los Alamitos, vol. 13, no. 100, pp. 21–27, 1991.

[9] Y. Liu, S. S. Ge, C. Li, and Z. You, "κ-NS: a classifier by the distance to the nearest subspace," IEEE Transactions on Neural Networks and Learning Systems, vol. 22, no. 8, pp. 1256–1268, 2011.

[10] J. Hamm and D. D. Lee, "Grassmann discriminant analysis: a unifying view on subspace-based learning," in Proceedings of the 25th International Conference on Machine Learning, pp. 376–383, July 2008.

[11] R. Wang, S. Shan, X. Chen, and W. Gao, "Manifold-manifold distance with application to face recognition based on image set," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[12] J. Chien and C. Wu, "Discriminant waveletfaces and nearest feature classifiers for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 12, pp. 1644–1649, 2002.

[13] J. Wright, A. Ganesh, Z. Zhou, A. Wagner, and Y. Ma, "Demo: robust face recognition via sparse representation," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG '08), pp. 1–2, IEEE, Amsterdam, The Netherlands, September 2008.

[14] W. Deng, J. Hu, and J. Guo, "In defense of sparsity based face recognition," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '13), pp. 399–406, June 2013.

[15] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: which helps face recognition?" in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 471–478, Barcelona, Spain, November 2011.

[16] P. Zhu, L. Zhang, Q. Hu, and S. C. K. Shiu, "Multi-scale patch based collaborative representation for face recognition with margin distribution optimization," in European Conference on Computer Vision, vol. 7572 of Lecture Notes in Computer Science, pp. 822–835, 2012.

[17] D. Yang, W. Chen, J. Wang, and Y. Xu, "Patch based face recognition via fast collaborative representation based classification and expression insensitive two-stage voting," International Conference on Computational Science and Its Applications, vol. 9787, pp. 562–570, 2016.

[18] J. Lai and X. Jiang, "Classwise sparse and collaborative patch representation for face recognition," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3261–3272, 2016.

[19] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), pp. 448–461, Springer, Crete, Greece, 2010.

[20] M. R. D. Rodavia, O. Bernaldez, and M. Ballita, "Web and mobile based facial recognition security system using Eigenfaces algorithm," in Proceedings of the 2016 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE '16), pp. 86–92, Bangkok, Thailand, December 2016.

[21] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[22] L. P. Yang, W. G. Gong, W. H. Li, and X. H. Gu, "Random sampling subspace locality preserving projection for face recognition," Optics & Precision Engineering, vol. 16, no. 8, pp. 1465–1470, 2008.

[23] X. Sun, M. Chen, and A. Hauptmann, "Action recognition via local descriptors and holistic features," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09), pp. 58–65, June 2009.

[24] X. Wang, Y. Zhang, X. Mu, and F. S. Zhang, "The face recognition algorithm based on improved LBP," International Journal of Optoelectronic Engineering, vol. 1, no. 4, pp. 76–80, 2012.

[25] J. Ahmad, U. Ali, and R. J. Qureshi, "Fusion of thermal and visual images for efficient face recognition using Gabor filter," in Proceedings of the IEEE International Conference on Computer Systems and Applications, pp. 135–139, March 2006.

[26] V. Struc, R. Gajsek, and N. Pavesic, "Principal Gabor filters for face recognition," in Proceedings of the IEEE 3rd International Conference on Biometrics: Theory, Applications and Systems (BTAS '09), pp. 1–6, Washington, DC, USA, September 2009.

[27] M. Li, X. Yu, K. H. Ryu, S. Lee, and N. Theera-Umpon, "Face recognition technology development with Gabor, PCA and SVM methodology under illumination normalization condition," Cluster Computing, no. 3, pp. 1–10, 2017.

[28] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.

[29] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data with application to face recognition," Pattern Recognition, vol. 34, no. 10, pp. 2067–2070, 2001.

[30] J. Kim, J. Choi, J. Yi, and M. Turk, "Effective representation using ICA for face recognition robust to local distortion and partial occlusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1977–1981, 2005.

[31] C. G. Turhan and H. S. Bilge, "Class-wise two-dimensional PCA method for face recognition," IET Computer Vision, vol. 11, no. 4, pp. 286–300, 2017.

[32] A. Mashhoori and M. Zolghadri Jahromi, "Block-wise two-directional 2DPCA with ensemble learning for face recognition," Neurocomputing, vol. 108, pp. 111–117, 2013.

[33] M. P. Rajath Kumar, R. Keerthi Sravan, and K. M. Aishwarya, "Artificial neural networks for face recognition using PCA and BPNN," in Proceedings of the 35th IEEE Region 10 Conference (TENCON '15), pp. 1–6, IEEE, Macao, China, November 2015.

[34] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies," IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.

[35] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proceedings of the IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), pp. 294–298, IEEE, Madison, WI, USA, August 2007.

[36] C. Zhang, H.-R. Yang, and S. Wei, "Compressive sensing based on deterministic sparse Toeplitz measurement matrices with random pitch," Acta Automatica Sinica, vol. 38, no. 8, pp. 1362–1369, 2012.

[37] E. S. Takemura, M. R. Petraglia, and A. Petraglia, "An image super-resolution algorithm based on Wiener filtering and wavelet transform," in Proceedings of the 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE '13), pp. 130–134, August 2013.

[38] M. Yang, Face Recognition via Sparse Representation, Wiley Encyclopedia of Electrical and Electronics Engineering, 2008.

[39] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.

[40] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.

[41] N. McLaughlin, J. Ming, and D. Crookes, "Largest matching areas for illumination and occlusion robust face recognition," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 796–808, 2017.

[42] Y. Xu, X. Fang, J. Wu, X. Li, and D. Zhang, "Discriminative transfer subspace learning via low-rank and sparse representation," IEEE Transactions on Image Processing, vol. 25, no. 2, pp. 850–863, 2016.

[43] H. Roy and D. Bhattacharjee, "Local-gravity-face (LG-face) for illumination-invariant and heterogeneous face recognition," IEEE Transactions on Information Forensics and Security, vol. 11, no. 7, pp. 1412–1424, 2016.

[44] Y. Tai, J. Yang, Y. Zhang, L. Luo, J. Qian, and Y. Chen, "Face recognition with pose variations and misalignment via orthogonal Procrustes regression," IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2673–2683, 2016.

[45] G. B. Huang, M. Narayana, and E. Learned-Miller, "Labeled faces in the wild: a database for studying face recognition in unconstrained environments," 2008.

[46] L. Wolf, T. Hassner, and Y. Taigman, "Similarity scores based on background samples," in Asian Conference on Computer Vision, vol. 5995, pp. 88–97, 2009.

[47] J. Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction, Springer, 2012.

[48] Q. Wang, J. Li, and Y. Shen, "A survey on deterministic measurement matrix construction algorithms in compressive sensing," Acta Electronica Sinica, vol. 41, no. 10, pp. 2041–2050, 2013.

[49] W. Yin, S. Morgan, J. Yang, and Y. Zhang, "Practical compressive sensing with Toeplitz and circulant matrices," in Proceedings of Visual Communications and Image Processing 2010, vol. 7744, July 2010.


[32] A Mashhoori and M Zolghadri Jahromi ldquoBlock-wise two-directional 2DPCA with ensemble learning for face recogni-tionrdquo Neurocomputing vol 108 pp 111ndash117 2013

[33] M P Rajath Kumar R Keerthi Sravan and K M AishwaryaldquoArtificial neural networks for face recognition using PCA andBPNNrdquo in Proceedings of the 35th IEEE Region 10 Conference(TENCON rsquo15) pp 1ndash6 IEEE Macao China November 2015

[34] E J Candes and T Tao ldquoNear-optimal signal recovery fromrandom projections universal encoding strategiesrdquo Institute ofElectrical and Electronics Engineers Transactions on InformationTheory vol 52 no 12 pp 5406ndash5425 2006

Mathematical Problems in Engineering 13

[35] W U Bajwa J D Haupt G M Raz S J Wright and R DNowak ldquoToeplitz-structured compressed sensing matricesrdquo inProceedings of the IEEESP 14th Workshop on Statistical SignalProcessing (SSP rsquo07) pp 294ndash298 IEEE Madison WI USAAugust 2007

[36] C Zhang H-R Yang and S Wei ldquoCompressive sensing basedon deterministic sparse toeplitz measurement matrices withrandom pitchrdquo Acta Automatica Sinica vol 38 no 8 pp 1362ndash1369 2012

[37] E S Takemura M R Petraglia and A Petraglia ldquoAn imagesuper-resolution algorithm based on wiener filtering andwavelet transformrdquo inProceedings of the 2013 IEEEDigital SignalProcessing and Signal Processing Education Meeting (DSPSPErsquo13) pp 130ndash134 August 2013

[38] M Yang Face Recognition via Sparse Representation WileyEncyclopedia of Electrical and Electronics Engineering 2008

[39] E J Candes J Romberg and T Tao ldquoRobust uncertaintyprinciples exact signal reconstruction from highly incompletefrequency informationrdquo Institute of Electrical and ElectronicsEngineers Transactions on InformationTheory vol 52 no 2 pp489ndash509 2006

[40] A S Georghiades P N Belhumeur and D J Kriegman ldquoFromfew to many illumination cone models for face recognitionunder variable lighting and poserdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 23 no 6 pp 643ndash6602001

[41] N McLaughlin J Ming and D Crookes ldquoLargest matchingareas for illumination and occlusion robust face recognitionrdquoIEEE Transactions on Cybernetics vol 47 no 3 pp 796ndash8082017

[42] Y Xu X Fang J Wu X Li and D Zhang ldquoDiscriminativetransfer subspace learning via low-rank and sparse representa-tionrdquo IEEE Transactions on Image Processing vol 25 no 2 pp850ndash863 2016

[43] H Roy and D Bhattacharjee ldquoLocal-gravity-face (lg-face) forillumination-invariant and heterogeneous face recognitionrdquoIEEE Transactions on Information Forensics and Security vol 11no 7 pp 1412ndash1424 2016

[44] Y Tai J Yang Y Zhang L Luo J Qian and Y ChenldquoFace recognition with pose variations and misalignment viaorthogonal procrustes regressionrdquo IEEE Transactions on ImageProcessing A Publication of the IEEE Signal Processing Societyvol 25 no 6 pp 2673ndash2683 2016

[45] G B Huang M Narayana and E Learned-Miller ldquoLabeledfaces in the wild a database forstudying face recognition inunconstrained environmentsrdquoMonth 2008

[46] LWolf T Hassner and Y Taigman ldquoSimilarity scores based onbackground samplesrdquo in Asian Conference on Computer Visionvol 5995 pp 88ndash97 2009

[47] J Wang Geometric Structure of High-Dimensional Data andDimensionality Reduction Springer 2012

[48] Q Wang J Li and Y Shen ldquoA survey on deterministicmeasurement matrix construction algorithms in compressivesensingrdquo Acta Electronica Sinica vol 41 no 10 pp 2041ndash20502013

[49] WYin SMorgan J Yang andY Zhang ldquoPractical compressivesensing with toeplitz and circulant matricesrdquo in Proceedingsof the Visual Communications and Image Processing 2010 vol7744 July 2010



