

Modeling Age Progression in Young Faces

Narayanan Ramanathan1
University of Maryland, College Park
[email protected]

Rama Chellappa
University of Maryland, College Park
[email protected]

Abstract

We propose a craniofacial growth model that characterizes growth-related shape variations observed in human faces during their formative years. The model draws inspiration from the ‘revised’ cardioidal strain transformation model proposed in psychophysical studies of craniofacial growth. The model takes into account anthropometric evidence collected on facial growth and hence is in accordance with the growth patterns observed in human faces across years. We characterize facial growth by means of growth parameters defined over facial landmarks often used in anthropometric studies. We illustrate how age-based anthropometric constraints on facial proportions translate into linear and non-linear constraints on the facial growth parameters and propose methods to compute the optimal growth parameters. The proposed craniofacial growth model can be used to predict one's appearance across years and to perform face recognition across age progression. This is demonstrated on a database of age-separated face images of individuals under 18 years of age.

1. Introduction

Human faces comprise a special class of 3D objects that have long been of interest to the computer vision and psychophysics communities. Apart from playing a crucial role in human identification, human faces convey a significant amount of information about one's age, gender, ethnicity, etc. In addition, facial expressions and facial gestures often reveal the emotional state of an individual. Consequently, human facial analysis has received considerable attention and has led to the development of novel approaches to face recognition, facial expression characterization, face modeling, etc. [14].

Psychophysical studies on human perception showed that changes in one's facial appearance often had a significant psychosocial impact on the individual [2]. For instance, facial attractiveness was found to affect interpersonal relationships, and the perceived age of an individual was observed to regulate the type and amount of behavior directed towards the individual. Further, growth-related changes to human faces were observed to directly impact facial aesthetics. Studies related to the perception of growing faces have largely been inspired by D'Arcy Thompson's study of morphogenesis [21]. Thompson pioneered the use of geometric transformations in the study of morphogenesis: biological forms were embedded within coordinate systems and different morphogenetic events were described by means of global geometric transformations of the coordinate system. Throughout his study, he maintained that morphological changes result from physical forces, such as bio-mechanical stress and gravity, that act on biological forms. Some of the initial studies related to craniofacial growth in humans were along similar lines.

1This work was partly supported by a fellowship from Apptis, Inc.

1.1. Previous Work

Pittenger and Shaw [19] studied facial growth as a viscal-elastic event defined on the craniofacial complex. They applied strain, shear and radial transformations to facial profiles and studied the relative significance of each transformation in accounting for the global remodeling of human faces with age. They observed that shape changes in facial profiles induced by cardioidal strain transformations formed the primary source of perceptual information for relative age judgements. Further, Mark et al. [17] identified geometric invariants that are characteristic of cardioidal strain transformations and proposed that only transformations that preserve such geometric invariants in human faces would be perceived as growth-related transformations. By performing a hydrostatic analysis of the effects of internal forces (a combination of biomechanical stress and gravitational forces) acting on a growing head, Todd et al. [23] proposed the ‘revised’ cardioidal strain transformation model to account for craniofacial growth. They treated the human head as a fluid-filled spherical object that remodels in accordance with the direction and amount of pressure exerted on its surface. Mark and Todd [16] extended the above model to three dimensions and simulated facial growth on 3D head scans of children.

In the computer vision literature, age progression in human faces has been addressed from two perspectives: one towards automatic age estimation and age-based classification from face images, and the other towards automatic age progression systems that can reliably predict one's appearance across age. Kwon and da Vitoria Lobo [11] proposed methods to classify face images as those of babies, young adults or senior adults. They used face-anthropometry-based approaches to separate images of babies from those of adults, and proposed methods to analyze facial wrinkles to further classify adult faces as those of young adults or senior adults. Burt and Perrett [5] created composite faces for different age groups by computing the average shape and texture of the human faces belonging to each age group. By incorporating the differences between such composite faces into regular faces, they observed a change in the perceived age of the faces. Tiddeman et al. [22] further extended this work by using wavelet methods to prototype facial textures across age. Lanitis et al. [13] constructed an aging function based on a parametric model for human faces and performed automatic age progression, age estimation, face recognition across age, etc. Further, Lanitis et al. [12] compared the above age estimation process with neural-network-based approaches. Gandhi [9] designed a support vector machine based age estimation technique and extended the image-based surface detail transfer approach to simulate aging effects on faces. Ramanathan and Chellappa [20] proposed a Bayesian age-difference classifier built on a probabilistic eigenspaces framework to perform face verification across age progression.

1.2. Motivation and Problem Statement

Some of the significant applications of studying age progression in human faces are face recognition across age (homeland security), automatic age estimation (parental control, age-based human-computer interaction), and prediction of one's appearance across age (finding missing individuals). Developing models that characterize age progression in faces is a very challenging task. Facial aging effects are predominantly manifested in the form of shape variations during one's younger years and as wrinkles and other textural variations during one's older years [15]. Though the aforementioned approaches propose novel methods to address age progression in faces, most of them ignore, in their formulation, the psychophysical evidence collected on age progression. Face anthropometric studies offer good insight into craniofacial growth and hence have long been used by physicians in treating craniofacial disorders. Face recognition systems developed with the incorporation of such data would be better equipped to handle age progression in faces.

We propose a craniofacial growth model that characterizes the shape variations undergone by human faces during their formative years. The craniofacial growth model draws inspiration from the ‘revised’ cardioidal strain transformation model proposed by Todd et al. [23] and, further, accounts for the age-based anthropometric constraints on human faces provided by Farkas [7]. The proposed craniofacial growth model can be used to predict one's appearance across age and to perform face recognition across age progression on individuals in the age range 0-18 years. We perform experiments to demonstrate this on a database of age-separated face images of individuals in that age range.

Section II gives an overview of the craniofacial growth model and highlights the importance of incorporating age-based anthropometric face measurements in developing the model. Section III discusses face anthropometry in relevance to studying age progression in human faces. Section IV provides a mathematical framework for the computation of the craniofacial growth model. Section V discusses the experiments that were performed using our model on age-separated face images of individuals in the age range 0-18 years, and Section VI discusses the strengths and limitations of the proposed craniofacial growth model and offers insights into future work on this topic.

2. Craniofacial Growth Model

Studies related to craniofacial growth were largely based on the hypothesis that any recognizable style of change is uniquely specified by geometric invariants, which form the basis for perceptual information. Mark et al. [17] identified three geometric invariants that are characteristic of cardioidal strain transformations: (i) the angular coordinate of every point on an object in a polar coordinate system is preserved; (ii) bilateral symmetry about the vertical axis is maintained; (iii) continuity of object contours is preserved. Re-examining the analogy drawn between craniofacial growth and the remodeling of a fluid-filled spherical object, as proposed by Todd et al. [23], one can identify the following aspects that help preserve the aforementioned geometric invariants under the transformation: (i) pressure is directed radially outwards; (ii) the pressure distribution is bilaterally symmetric about the vertical axis; (iii) the pressure distribution is continuous throughout the object. Hence the proposed model satisfies the criteria based on the geometric invariants observed in craniofacial growth. Fig. 1(a) illustrates the pressure distribution inside a fluid-filled spherical object. (A similar illustration appears in [2].)

Figure 1. (a) Remodeling of a fluid-filled spherical object (b) Facial growth simulated on the profile of a child's face using the ‘revised’ cardioidal strain transformations.

Mathematically, the ‘revised’ cardioidal strain transformation model is expressed as follows [23]. Let P denote the pressure at a particular point on the object surface, acting radially outward. Let (R_0, θ_0) and (R_1, θ_1) denote the angular coordinates of a point on the surface of the object before and after the transformation, and let k denote a growth-related constant.

P ∝ R_0 (1 − cos(θ_0))
R_1 = R_0 + k (R_0 − R_0 cos(θ_0))        (1)
θ_1 = θ_0

We applied the ‘revised’ cardioidal strain transformation to the face profile of a child. Upon varying the growth parameter k, we observed that the transformation of the face profiles closely resembled that observed in actual facial growth. Further, the perceived age of each of the individual face profiles increased with increasing values of the growth parameter k. Fig. 1(b) illustrates the face profiles obtained by employing the above transformation. Next, we applied the age transformation model to frontal face images of children and observed that, for small age differences, the age-transformed faces resembled real-life images taken across years reasonably well. For large values of k, which essentially imply larger age transformations, the aspect ratios between different regions of the transformed faces were less well preserved. Fig. 2 illustrates the face images obtained by applying the ‘revised’ cardioidal strain transformation model. In each of the two instances illustrated in Fig. 2, we observe that while the age transformation is perceivable in the first few transformations, the aspect ratios of the faces obtained for large age transformations seem unnatural.
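As an illustration (not part of the original formulation), the transformation in eq. 1 can be sketched in a few lines of Python; the origin of reference, the image-coordinate convention and the sample profile points below are assumptions made purely for the example.

```python
import numpy as np

def cardioidal_strain(points, origin, k):
    """Apply the 'revised' cardioidal strain transformation (eq. 1) to 2D points.

    points : (N, 2) array of (x, y) coordinates, y growing downward (image convention).
    origin : (x, y) origin of reference on the facial mid-line (assumed known).
    k      : growth parameter; larger k corresponds to a larger age transformation.

    theta is measured from the upward vertical axis through the origin, so points
    near the top of the head (theta ~ 0) stay almost static while points near the
    chin (theta ~ pi) move the most, matching the model's radial/angular constraints.
    """
    pts = np.asarray(points, dtype=float)
    dx = pts[:, 0] - origin[0]
    dy = origin[1] - pts[:, 1]                      # flip so "up" is positive
    r0 = np.hypot(dx, dy)
    theta0 = np.arctan2(dx, dy)                     # angle from the upward vertical axis
    r1 = r0 * (1.0 + k * (1.0 - np.cos(theta0)))    # R1 = R0 + k(R0 - R0 cos(theta0))
    x1 = origin[0] + r1 * np.sin(theta0)
    y1 = origin[1] - r1 * np.cos(theta0)            # theta1 = theta0
    return np.stack([x1, y1], axis=1)

# Toy profile grown with increasing k, in the spirit of Fig. 1(b).
if __name__ == "__main__":
    profile = np.array([[100.0, 40.0], [120.0, 80.0], [118.0, 140.0], [100.0, 200.0]])
    origin = (100.0, 60.0)                          # hypothetical point between tr and n
    for k in (0.06, 0.12, 0.18):
        print(k, cardioidal_strain(profile, origin, k).round(1))
```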

Face anthropometric studies report that different facial regions reach maturation at different ages and hence some facial features change relatively little compared to other facial features as age increases. In terms of the ‘revised’ cardioidal strain transformation model, this observation implies that different regions of the human face have different growth parameters across age. Hence, it is important to incorporate anthropometric evidence collected on facial growth while developing the model, so that we can reliably estimate the growth parameters for different regions of the human face across age.

Figure 2. Age transformation results obtained by applying the ‘revised’ cardioidal strain transformation to real-life face images of two individuals (8 years and 13 years of age, respectively). The growth parameters chosen for the 8 instances were (i) k = 0.06 (ii) k = 0.12 (iii) k = 0.18 (iv) k = 0.21 (v) k = 0.06 (vi) k = 0.12 (vii) k = 0.18 (viii) k = 0.27. The original images belong to the FG-NET aging database [1].

3. Face Anthropometry

Face anthropometry is the science of measuring sizes and proportions on human faces. Face anthropometric studies provide a quantitative description of the craniofacial complex by means of measurements taken between key landmarks on human faces across age, and are often used in characterizing normal and abnormal facial growth. Farkas [7] provides a comprehensive overview of face anthropometry and its many significant applications. He defines face anthropometry in terms of measurements taken from 57 landmarks on human faces. The measurements taken on human faces are of three kinds: (i) projective measurements (shortest distance between two landmarks), (ii) tangential measurements (distance between two landmarks measured along the skin surface) and (iii) angular measurements. By comparing real anthropometric measurements with those obtained through photogrammetry of human faces, Farkas identifies facial landmarks that can be reliably estimated from standard photographs of human faces (20%, 25%, 33%, 50% or life size) for photogrammetric applications.

We use the age-based facial measurements and proportion indices (ratios of distances between facial landmarks) provided in [7] and [8] to build the craniofacial growth model. Since our study primarily involves frontal face images of individuals across age, we use only those facial landmarks that can be reliably located using photogrammetry. Further, we take into account only linear projective measurements taken on human faces across age, since angular measurements and tangential measurements on faces cannot be estimated accurately using photogrammetry of frontal face images. Fig. 3 illustrates the 24 facial landmarks and some of the important facial measurements that were used in our study.

Figure 3. Face anthropometry: of the 57 facial landmarks defined in [7], we choose the 24 landmarks illustrated above for our study. We further illustrate some of the key facial measurements that were used to develop the growth model.

Facial growth is often studied using proportion indices, and hence we interpret facial measurements in terms of proportion indices while developing the craniofacial growth model. Some of the proportion indices used in our study are: (i) facial index ((n-gn)/(zy-zy)), (ii) mandibular index ((sto-gn)/(go-go)), (iii) intercanthal index ((en-en)/(ex-ex)), (iv) orbital width index ((ex-en)/(en-en)), (v) eye fissure index ((ps-pi)/(ex-en)), (vi) nasal index ((al-al)/(n-sn)), (vii) vermilion height index ((ls-sto)/(sto-li)), (viii) mouth-face width index ((ch-ch)/(zy-zy)), etc. Table 1, compiled from various anthropometric studies in [7], illustrates the different growth patterns observed in different parts of the face in men and women. RTI (Relative Total Increment) is defined as ((l18 − l1)/l1) × 100, where l1 and l18 correspond to the mean values of a measurement obtained at ages 1 and 18, respectively. The age of maturation and the period of growth spurt are estimated by analyzing relative changes in facial measurements across years. Face anthropometry has been successfully used in computer graphics applications by DeCarlo et al. [6] in developing geometric models for human faces and by Kahler [10] in simulating growth on human head models.

Table 1. Growth pattern in different facial regions (M = male, F = female)

            RTI (%)         Growth spurt (yrs)    Maturation age (yrs)
Feature     M       F       M        F            M        F
n-gn        50.5    44.8    1-4      1-5          15       13
zy-zy       38.7    35.9    3-4      3-4          15       13
en-en       20.5    17.5    3-4      3-4          11       8
al-al       30.9    21.2    3-4      3-4          14       12
n-sn        71.5    67.5    1-2      3-4          15       12
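To make these quantities concrete, here is a small sketch (helper names and numbers are ours, not data from [7]) that evaluates a proportion index from 2D landmark coordinates and the Relative Total Increment from mean measurements at ages 1 and 18; bilateral landmarks such as zy are given suffixed names for the left and right sides.

```python
import math

def projective_distance(landmarks, a, b):
    """Projective measurement: straight-line distance between two named landmarks."""
    (xa, ya), (xb, yb) = landmarks[a], landmarks[b]
    return math.hypot(xa - xb, ya - yb)

def proportion_index(landmarks, num, den):
    """Ratio of two projective measurements, e.g. the facial index (n-gn)/(zy-zy)."""
    return projective_distance(landmarks, *num) / projective_distance(landmarks, *den)

def rti(l1, l18):
    """Relative Total Increment: ((l18 - l1) / l1) * 100."""
    return (l18 - l1) / l1 * 100.0

# Toy example with made-up pixel coordinates for four landmarks.
landmarks = {"n": (100, 80), "gn": (100, 210), "zy_l": (40, 120), "zy_r": (160, 120)}
facial_index = proportion_index(landmarks, num=("n", "gn"), den=("zy_l", "zy_r"))
print(f"facial index = {facial_index:.2f}")
print(f"RTI = {rti(l1=80.0, l18=120.0):.1f}%")   # hypothetical mean measurements
```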

4. Computational Aspects

This section details the computational aspects involved in using face anthropometric data to develop the proposed craniofacial growth model that concurs with psychophysical studies related to the growth of human faces.

4.1. Feature Localization

Of the 57 landmarks identified in face anthropometric studies, we chose 24 landmarks that can be reliably located on frontal faces, as illustrated in Fig. 3. To automatically detect facial landmarks on frontal faces, we use the face detection and feature localization method proposed by Moon et al. [18]. We detect facial features such as the eyes, the mouth and the outer contour of the face by fitting ellipses of different sizes and orientations. This operation enables the location of the following facial landmarks: (tr, gn - forehead and chin), (en, ex, ps, pi - eyes) and (ch, sto, ls, li - mouth). Using proportion indices obtained from distances between the detected landmarks, such as the intercanthal index ((en-en)/(ex-ex)), the biocular width-total face height index ((ex-ex)/(tr-gn)) and the intercanthal-mouth width index ((en-en)/(ch-ch)), we estimate the age group to which the face image belongs and detect the remaining facial landmarks using anthropometric data available for faces belonging to that age group (see the sketch below). Minor errors in feature localization do not significantly affect the proposed method for computing the facial growth parameters.
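The age-group assignment step can be sketched as follows; the reference index values per age group are placeholders (in practice they would come from the age-based tables in [7]), and the nearest-match rule is our reading of the text rather than the authors' exact procedure.

```python
import math

# Hypothetical per-age-group mean proportion indices; real values would be taken
# from the age-based anthropometric tables in Farkas [7].
REFERENCE_INDICES = {
    "0-4":   {"intercanthal": 0.42, "biocular_face_height": 0.55, "intercanthal_mouth": 0.72},
    "5-10":  {"intercanthal": 0.40, "biocular_face_height": 0.52, "intercanthal_mouth": 0.68},
    "11-18": {"intercanthal": 0.38, "biocular_face_height": 0.49, "intercanthal_mouth": 0.64},
}

def estimate_age_group(measured):
    """Pick the age group whose reference proportion indices are closest (in the
    Euclidean sense) to the indices measured from the detected landmarks."""
    def distance(ref):
        return math.sqrt(sum((measured[k] - ref[k]) ** 2 for k in measured))
    return min(REFERENCE_INDICES, key=lambda g: distance(REFERENCE_INDICES[g]))

measured = {"intercanthal": 0.39, "biocular_face_height": 0.50, "intercanthal_mouth": 0.66}
print(estimate_age_group(measured))   # -> "11-18" for these toy numbers
```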

Under the craniofacial growth model defined in eq. 1, facial features with angular coordinate θ = 0 remain static and features with angular coordinate θ such that |θ| ≤ ε, where ε is a small number, grow minimally. Hence, reliably estimating the origin of reference for the ‘revised’ cardioidal strain transformation model is crucial to the success of the model in characterizing facial growth. The RTI defined in the previous section is a quantitative measure of the extent of growth observed in facial features across age. The RTI of the forehead length tr-n is a low 11.8% in men and 2.25% in women compared to that of other facial measurements. If the origin of reference is located between tr and n along the facial mid-line axis, one can appreciate the effectiveness of the radial and angular constraints defined by the craniofacial growth model in eq. 1 in characterizing facial growth. Fig. 4 further strengthens this notion. Prototype faces for different ages were created using the mean values of measurements taken across the different facial landmarks. Fig. 4 illustrates the prototype faces for ages 2, 5, 8, 12, 15 and 18 years, and illustrates the flow of facial features across age, which is used in determining the optimal origin of reference for the craniofacial growth model.

Figure 4. Prototype faces created for different ages using anthropometric measurements from [7]. The flow of facial features across age, illustrated separately, validates the constraints imposed by the craniofacial growth model proposed in eq. 1.

4.2. Model Computation: An Optimization Problem

Let the facial growth parameters of the ‘revised’ cardioidal strain transformation model corresponding to the facial landmarks designated by [n, sn, ls, sto, li, sl, gn, en, ex, ps, pi, zy, al, ch, go] be [k1, k2, ..., k15], respectively. The facial growth parameters for different age transformations can be computed using anthropometric constraints on facial proportions. The computation of the facial growth parameters is formulated as a non-linear optimization problem. We identified 52 facial proportions that can be reliably estimated using photogrammetry of frontal face images. Anthropometric constraints based on proportion indices translate into linear and non-linear constraints on selected facial growth parameters. While constraints based on proportion indices such as the intercanthal index and the nasal index result in linear constraints on the growth parameters, constraints based on proportion indices such as the eye fissure index and the orbital width index result in non-linear constraints on the growth parameters.

Let the constraints derived using proportion indices be denoted as r1(k) = β1, r2(k) = β2, ..., rN(k) = βN. The objective function f(k) that needs to be minimized with respect to k is defined as

f(k) = (1/2) Σ_{i=1}^{N} (r_i(k) − β_i)^2        (2)

The following equations illustrate the constraints derived using different facial proportion indices (here the α^(i)_j and β_i are constants, and c_i is the age-based proportion index obtained from [7]):

r1 : [ (n-gn)/(zy-zy) = c1 ]  ≡  α^(1)_1 k1 + α^(1)_2 k7 + α^(1)_3 k12 = β1

r2 : [ (al-al)/(ch-ch) = c2 ]  ≡  α^(2)_1 k13 + α^(2)_2 k14 = β2

r3 : [ (li-sl)/(sto-sl) = c3 ]  ≡  α^(3)_1 k4 + α^(3)_2 k5 + α^(3)_3 k6 = β3

r4 : [ (sto-gn)/(gn-zy) = c4 ]  ≡  α^(4)_1 k5 + α^(4)_2 k7 + α^(4)_3 k12 + α^(4)_4 k4^2 + α^(4)_5 k7^2 + α^(4)_6 k12^2 + α^(4)_7 k4 k7 + α^(4)_8 k7 k12 = β4

We use the Levenberg-Marquardt non-linear optimization algorithm [3] to compute the growth parameters that minimize the objective function in an iterative fashion. We use the craniofacial growth model defined in eq. 1 to compute the initial estimate of the facial growth parameters; the initial estimates are obtained from the age-based facial measurements provided for each facial landmark individually. The iterative step involved in the optimization process is defined as

k_{j+1} = k_j − (H + λ diag[H])^{-1} ∇f(k_j)        (3)

where ∇f(k_j) = Σ_{i=1}^{N} (r_i(k_j) − β_i) ∇r_i(k_j) and H corresponds to the Hessian matrix evaluated at k_j. At the end of each iteration, λ is updated as described in [3].
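A minimal sketch of this fitting step, assuming the residuals r_i(k) − β_i are available as Python callables (the toy residuals below are placeholders, not the paper's 52 proportion constraints over 15 parameters); scipy's least_squares with method="lm" plays the role of the Levenberg-Marquardt iteration in eq. 3.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-ins for the proportion-index constraints r_i(k) = beta_i.
# The paper uses 52 such constraints over 15 growth parameters; here we use
# two growth parameters and three made-up constraints for illustration.
def residuals(k):
    k1, k2 = k
    return np.array([
        0.8 * k1 + 0.2 * k2 - 0.10,          # a linear constraint (intercanthal-type)
        0.5 * k1 - 0.4 * k2 - 0.02,          # another linear constraint
        0.3 * k1 ** 2 + k1 * k2 - 0.015,     # a non-linear constraint (orbital-width-type)
    ])

# Initial estimates would come from per-landmark age-based measurements via eq. 1;
# here we simply start from small positive growth parameters.
k0 = np.array([0.05, 0.05])

# Levenberg-Marquardt minimization of f(k) = 1/2 * sum_i (r_i(k) - beta_i)^2.
result = least_squares(residuals, k0, method="lm")
print("optimal growth parameters:", result.x)
print("objective value:", 0.5 * np.sum(result.fun ** 2))
```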

Next, using the growth parameters computed over selected facial landmarks, we compute the growth parameters over the entire face region. This is formulated as a scattered data interpolation problem [4]. On a Cartesian coordinate system defined over the face region, the growth parameters k = [k1, k2, ..., kn] correspond to the parameters obtained at facial landmarks located at (x1, y1), (x2, y2), ..., (xn, yn). Our objective is to find an interpolating function f : R^2 → R such that

f(x_i) = k_i,   i = 1, ..., n        (4)

where x_i = (x_i, y_i), and the thin-plate energy functional E, defined as

E = ∫∫_Ω ( f_xx^2(x) + 2 f_xy^2(x) + f_yy^2(x) ) dx        (5)

is minimized. E is a measure of the amount of bending in the surface, and Ω in eq. 5 is the region of interest (the face region, in our case). Using the method of radial basis functions, the interpolating function that minimizes the energy functional can be shown to take the form

f(x) = p(x) + Σ_{i=1}^{n} λ_i φ(|x − x_i|)        (6)

where p(x) is a linear polynomial, the λ_i are real numbers and |·| is the Euclidean norm in R^2. The linear polynomial p(x) accounts for affine deformations in the system. We adopt the thin-plate spline function φ(x) = |x|^2 log(|x|) as the basis function. As illustrated in [4], to remove affine contributions from the basis functions we introduce the additional constraints Σ_{i=1}^{n} λ_i = Σ_{i=1}^{n} λ_i x_i = Σ_{i=1}^{n} λ_i y_i = 0. Eqs. 4 and 6, coupled with the constraints above, result in the following linear system of equations, by solving which we compute the interpolating function f:

[ A    P ] [ λ ]     [ k ]
[ P^T  0 ] [ c ]  =  [ 0 ]        (7)

where A is a matrix with entries A_{i,j} = φ(|x_i − x_j|), i, j = 1, ..., n, P is a matrix with rows (1, x_i, y_i), λ = (λ_1, ..., λ_n), and c = (c_1, c_2, c_3) are the coefficients of the polynomial function. Thus the growth parameters computed at selected facial landmarks using face anthropometry are used to compute the growth parameters over the entire facial region. Upon computing the growth parameters, the proposed craniofacial growth model can be applied to automatically age a face image. For an age transformation from p years to q years (q > p), the model takes a form similar to the one defined in eq. 1. In a polar coordinate framework, the transformation is defined as

R^i_q = R^i_p (1 + k^i_pq (1 − cos(θ^i_p)))
θ^i_q = θ^i_p

where i corresponds to the i'th facial feature and k^i_pq is the growth parameter for a transformation from p years to q years. The model can also be used to lower the age of face images. For an age transformation from p years to q years (p > q), we define the following inverse transformation:

R^i_q = R^i_p / (1 + k^i_qp (1 − cos(θ^i_q)))
θ^i_q = θ^i_p
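The interpolation and warping steps above can be sketched as follows. This is an illustrative implementation under our own conventions (the landmark coordinates, origin and growth parameters are toy values), not the authors' code: it builds and solves the linear system of eq. 7 with the thin-plate basis φ(r) = r^2 log(r), then applies the interpolated growth parameter at each point through the radial transformation.

```python
import numpy as np

def _phi(r):
    """Thin-plate spline basis phi(r) = r^2 log(r), with phi(0) = 0."""
    out = np.zeros_like(r)
    np.log(r, out=out, where=r > 0)
    return out * r ** 2

def tps_fit(landmarks, k_values):
    """Solve the linear system of eq. 7 for the TPS weights lambda and the affine
    coefficients c that interpolate the growth parameters k at the landmarks."""
    pts = np.asarray(landmarks, dtype=float)
    k = np.asarray(k_values, dtype=float)
    n = len(pts)
    A = _phi(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])                  # rows (1, x_i, y_i)
    M = np.zeros((n + 3, n + 3))
    M[:n, :n], M[:n, n:], M[n:, :n] = A, P, P.T
    sol = np.linalg.solve(M, np.concatenate([k, np.zeros(3)]))
    return pts, sol[:n], sol[n:]                           # centres, lambda, c

def tps_eval(x, centres, lam, c):
    """Evaluate f(x) = p(x) + sum_i lambda_i phi(|x - x_i|)  (eq. 6)."""
    x = np.atleast_2d(x).astype(float)
    phi = _phi(np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1))
    return c[0] + x @ c[1:] + phi @ lam

def age_transform(points, origin, centres, lam, c):
    """Apply R_q = R_p (1 + k(x)(1 - cos theta)), theta_q = theta_p, with k(x)
    taken from the interpolated growth-parameter field; theta is measured from
    the upward vertical axis through the origin of reference."""
    pts = np.asarray(points, dtype=float)
    k = tps_eval(pts, centres, lam, c)
    dx, dy = pts[:, 0] - origin[0], origin[1] - pts[:, 1]
    r0, theta = np.hypot(dx, dy), np.arctan2(dx, dy)
    r1 = r0 * (1.0 + k * (1.0 - np.cos(theta)))
    return np.stack([origin[0] + r1 * np.sin(theta),
                     origin[1] - r1 * np.cos(theta)], axis=1)

# Toy example: growth parameters at three landmarks define the field,
# which is then used to warp an arbitrary point in the face region.
landmarks = np.array([[100.0, 120.0], [60.0, 150.0], [140.0, 150.0]])
centres, lam, c = tps_fit(landmarks, [0.10, 0.15, 0.15])
print(age_transform(np.array([[100.0, 200.0]]), (100.0, 80.0), centres, lam, c))
```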

5. Experimental Results

Here, we describe two experiments that were conducted on a database of age-separated face images of individuals under 18 years of age. The proposed craniofacial growth model was used to predict one's appearance across age and to perform face recognition across age progression. The database comprises a set of images from the FG-NET aging database [1] and a set of age-separated face images that we collected for our research; in total, it contains 233 images of 109 individuals.

Fig. 5 shows some of the age transformation results obtained using the proposed model. In Fig. 5 we illustrate the original and the age-transformed face images along with the facial growth parameters for different age transformations on different individuals. The facial growth parameters help in observing the different growth patterns of individuals across age. For instance, one can observe that while facial features along the outer contour of the face grow rapidly in the initial few years, they grow relatively less than other facial features beyond 15 years of age.

Next, for the face recognition experiment, we create the gallery and probe sets from the database such that the gallery set comprises a single image per individual and the probe set comprises either single or multiple images per individual. One of the challenges involved in recognizing face images of children across age progression is accounting for growth-related shape variations in children's faces. Most face recognition systems process face images as 2D arrays of fixed dimensions M × N. Such a constraint, if imposed on pairs of age-separated face images of children, would result in the face recognition system comparing pairs of face images with misaligned facial features that were scaled without due consideration of facial growth. Hence, to perform recognition across age-separated face images of children, a better approach to handling differences in face size is to employ an age transformation such that the pairs of images to be compared are transformed to the same age and hence presumably have comparable sizes.

To highlight the importance of performing an age transformation on pairs of age-separated face images before performing face recognition across age progression, we designed the following experiment. Let the images comprising the gallery and probe sets be designated as G : {G^{x1}_1, G^{x2}_2, ..., G^{xm}_m} and P : {P^{y1}_1, P^{y2}_2, ..., P^{yn}_n}, respectively, where (x1, ..., xm, y1, ..., yn) correspond to the ages of the individuals present in the gallery and probe sets. Under the first setting, the images in the gallery and probe sets are scaled and cropped such that they are all of dimensions M × N and the eyes are aligned across all images. Under the second setting, given a probe image P^{yi}_i, we perform an age transformation operation on all the gallery images and generate gallery images for age yi years; this operation is repeated for every probe image P^{yi}_i, i = 1, ..., n. Fig. 6 illustrates the gallery images that were generated for different ages from the images of two subjects in the original gallery. Using eigenfaces [24], we perform face recognition under both settings (a sketch of the protocol follows the table below). The recognition results tabulated in Table 2 show that better recognition rates can be achieved when age transformation operations are performed on pairs of age-separated face images before recognition. The recognition results reported in Table 2 were obtained without accounting for differences in illumination, head pose, facial expressions, etc. between pairs of age-separated face images; hence, the rank-1 recognition scores are low.

Figure 5. Age transformation results on different individuals. (The original images shown above were taken from the FG-NET database [1].)

Table 2. Recognition results (%) before and after age transformation

Approach             Rank 1   Rank 5   Rank 10
No transformation    8        28       44
Age transformed      15       37       58
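The second-setting protocol can be sketched as follows; age_transform_image and the eigenface projection are assumed to be available (for instance, routines like those sketched earlier and a PCA fit on training faces), and the function names and data layout are ours, not the paper's.

```python
import numpy as np

def recognize_with_age_transform(gallery, probes, age_transform_image, project):
    """Second experimental setting: before matching, every gallery image is
    age-transformed to the probe's age, then compared in eigenface space.

    gallery : list of (subject_id, image, age)
    probes  : list of (subject_id, image, age)
    age_transform_image(image, from_age, to_age) -> image   (assumed available)
    project(image) -> eigenface coefficient vector           (assumed available)
    Returns the rank-1 recognition rate.
    """
    rank1_hits = 0
    for probe_id, probe_img, probe_age in probes:
        probe_vec = project(probe_img)
        scores = []
        for gal_id, gal_img, gal_age in gallery:
            transformed = age_transform_image(gal_img, gal_age, probe_age)
            scores.append((np.linalg.norm(project(transformed) - probe_vec), gal_id))
        best_id = min(scores)[1]            # nearest gallery subject in eigenface space
        rank1_hits += (best_id == probe_id)
    return rank1_hits / len(probes)
```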

6. Discussions and Conclusions

We have proposed a craniofacial growth model that takes into account both psychophysical evidence on how humans perceive age progression in faces and anthropometric evidence on facial growth. We have demonstrated the effectiveness of the proposed model in predicting one's appearance across age and in improving recognition results across age-separated face images of individuals. Some of the strengths and weaknesses of the proposed method are:

• The craniofacial growth model that we propose is unique to each individual. Though common growth patterns are observed across humans, there may be subtle differences in facial growth between individuals. Under the same age transformation, though the facial growth parameters computed at the facial landmarks are identical across individuals, the facial growth parameters computed over the entire face region are adapted to each individual and hence differ between individuals.

• The model accounts for gender-based differences in facial growth, as it was developed using anthropometric data pertaining to men and women separately.

• Further, the craniofacial growth model can be adapted to characterize facial growth of people of different origins by using anthropometric data pertaining to people from those origins. In this work, the anthropometric data used to develop the model was obtained from facial measurements taken on Caucasian faces, and hence we expect our model to work better on Caucasian faces.

• However, the proposed approach lacks a textural model and does not account for textural variations across age. Hence, facial hair and other commonly observed textural variations in teenagers are not accounted for. Further, it does not account for changes in the amount of fat tissue in the face; the model retains ‘baby fat’ and hence the age transformation results obtained on toddlers were poor.

Figure 6. Generated gallery images at different ages for two subjects from the original gallery set.

In the future, we wish to take a closer look at textural variations in human faces with age (both during the formative years and during old age).

References

[1] Face and Gesture Recognition Network: FG-NET aging database. [Online]. Available: http://sting.cycollege.ac.cy/~alanitis/fgnetaging/
[2] R. Alley. Social and Applied Aspects of Perceiving Faces. NJ: Lawrence Erlbaum Associates, Inc., 1998.
[3] D. M. Bates and D. G. Watts. Nonlinear Regression and its Applications. Wiley, New York, 1988.
[4] F. L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(6):567–585, June 1989.
[5] M. Burt and D. I. Perrett. Perception of age in adult Caucasian male faces: computer graphic manipulation of shape and colour information. Journal of Royal Society, 259:137–143, February 1995.
[6] D. DeCarlo, D. Metaxas, and M. Stone. An anthropometric face model using variational techniques. Proceedings of SIGGRAPH, pages 67–74, 1998.
[7] L. G. Farkas. Anthropometry of the Head and Face. Raven Press, New York, 1994.
[8] L. G. Farkas and I. R. Munro. Anthropometric Facial Proportions in Medicine. Charles C Thomas, Springfield, Illinois, USA, 1987.
[9] M. Gandhi. A method for automatic synthesis of aged human facial images. Master's thesis, McGill University, September 2004.
[10] K. Kahler. A head model with anatomical structure for facial modeling and animation. Master's thesis, Universität des Saarlandes, 2003.
[11] Y. H. Kwon and N. da Vitoria Lobo. Age classification from facial images. Computer Vision and Image Understanding, 74:1–21, April 1999.
[12] A. Lanitis, C. Draganova, and C. Christodoulou. Comparing different classifiers for automatic age estimation. IEEE Transactions on Systems, Man and Cybernetics - Part B, 34(1):621–628, February 2004.
[13] A. Lanitis, C. J. Taylor, and T. F. Cootes. Toward automatic simulation of aging effects on face images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):442–455, April 2002.
[14] S. Z. Li and A. K. Jain. Handbook of Face Recognition. Springer, New York.
[15] L. S. Mark, J. B. Pittenger, H. Hines, C. Carello, R. E. Shaw, and J. T. Todd. Wrinkling and head shape as coordinated sources of age level information. Perception and Psychophysics, 27(2):117–124, 1980.
[16] L. S. Mark and J. T. Todd. The perception of growth in three dimensions. Perception and Psychophysics, 33(2):193–196, 1983.
[17] L. S. Mark, J. T. Todd, and R. E. Shaw. Perception of growth: A geometric analysis of how different styles of change are distinguished. Journal of Experimental Psychology: Human Perception and Performance, 7:855–868, 1981.
[18] H. Moon, R. Chellappa, and A. Rosenfeld. Optimal edge-based shape detection. IEEE Transactions on Image Processing, 11(11):1209–1226, 2002.
[19] J. B. Pittenger and R. E. Shaw. Aging faces as viscal-elastic events: Implications for a theory of nonrigid shape perception. Journal of Experimental Psychology: Human Perception and Performance, 1(4):374–382, 1975.
[20] N. Ramanathan and R. Chellappa. Face verification across age progression. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 462–469, San Diego, USA, 2005.
[21] D. W. Thompson. On Growth and Form. Dover Publications, 1992 (original publication 1917).
[22] B. Tiddeman, D. M. Burt, and D. Perrett. Prototyping and transforming facial texture for perception research. IEEE Computer Graphics and Applications, 21(5):42–50, July-August 2001.
[23] J. T. Todd, L. S. Mark, R. E. Shaw, and J. B. Pittenger. The perception of human growth. Scientific American, 242(2):132–144, 1980.
[24] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3:72–86, 1991.
