
Emotion Recognition from Facial Action Points by Principal Component Analysis

Anisha Halder, Garima Singh, Arindam Jati, Amit Konar
ETCE Department, Jadavpur University, Kolkata-32, India
[email protected], [email protected], [email protected], [email protected]

Aruna Chakraborty
Department of Computer Science and Engineering, St. Thomas’ College of Engineering and Technology, Kolkata, India
[email protected]

Atulya K. Nagar
Department of Math and Computer Science, Liverpool Hope University, Liverpool, UK
[email protected]

Abstract— This paper proposes a novel approach to emotion recognition of a subject employing 36 selected facial action points marked at specific locations on the face. Facial expressions enacted by the subjects are recorded, and the corresponding changes in the marked action points are measured. The measurements reveal that the action points vary widely across facial expressions carrying diverse and intensified emotions. Considering 10 instances of each facial expression of the same emotion from each of 10 subjects, we obtain a set of 100 distance matrices, where each entry represents the distance between two selected action points. The 100 matrices for each emotion are averaged, and the first principal component, representing the most prominent features of the average distance matrix, is evaluated. In the recognition phase, the first principal component obtained from the distance matrix of an unknown facial expression is evaluated, and its Euclidean distance from the first principal component of each emotion is determined. The unknown expression is assigned to emotion class j if its Euclidean distance from the j-th class principal component is minimum. Classification of 120 facial images, with an equal number of samples for six emotion classes, reveals an average classification accuracy of 92.5%, the highest being recorded for relax and disgust and the lowest for fear and anger.

Keywords- Action points, Emotion Recognition, Principal Component Analysis (PCA).

I. INTRODUCTION

Due to its increasing scope of application in human-computer interfaces, emotion recognition forms an inevitable part of artificial intelligence. Several modalities of emotion recognition, including facial expression, voice, gesture and posture, have been studied in the literature. Irrespective of the modality, however, emotion recognition comprises two fundamental steps: feature extraction and classification [1]. Feature extraction refers to the determination of a set of features/attributes, preferably independent, which together represent a given emotional expression. Classification aims at mapping emotional features into one of several emotion classes. Among the well-known methods of determining human emotions, Fourier descriptors [2], template matching [3], and neural network techniques [4], [5] deserve special mention. Other important works on recognizing emotions from facial expressions address the selection of suitable features [2], [6], [7], [8], [9], [10], [11] and the choice of the right classifier [3], [4], [5], [10], [12], [13], [14], [15], [16], [17].

Human facial expressions as well as facial movements change with emotion. The significance of facial movements can be understood from a biological point of view. The human face comprises several muscles that are responsible for the movement of every part of the face. For instance, facial attributes such as mouth- and eye-opening, commonly used for emotion recognition in the literature, correlate well with specific muscles in the face. Naturally, detecting the motion of these muscles carries more fundamental information than their aggregate effects, such as smiling or raising of the eyebrows. This paper aims at detecting the motion of the muscles involved in smiling, raising of the eyebrows, partial and total opening of the eyes, constriction of the eyebrows, and the temporal wrinkles formed by muscle movements in the face while a specific emotion is experienced.

The first research on emotion recognition from facial action points/units is by Ekman (1975) [11], followed by Ekman and Friesen [21]. In [22], the authors considered 46 basic facial action points to represent the movements of the cheek, chin and wrinkles. Unfortunately, since their introduction, action point features did not receive much attention, as most researchers emphasized high-level features comprising the joint movement of several muscles rather than the primitive action points. Recently, researchers are of the view that high-level features are more person-specific, vary widely because of cultural traits, and are not always free from gender and region bias. The action points, being more fundamental and localized, offer culture-, race- and gender-independent features containing sufficient information for automatic recognition of emotions.


The only difference between the action unit based emotion recognition scheme and the others is probably that the action point based recognition is easier for machines, while high level features are a common source of recognition by humans.

Recently, Pantic and her research team [17], [18], [19], [20] employed action units in emotion recognition, and noted that their methods outperformed other high-level-feature based emotion recognition schemes. Among other interesting works on action point based emotion recognition, those undertaken in [21], [22], [24], [25], [26], [27] need special mention. This paper provides an alternative approach to emotion recognition of a subject from 36 selected facial action points marked at specific locations on the face (Fig. 1). Facial expressions enacted by 10 subjects are recorded, and the changes in the marked action points are measured. The measurements indicate that the action points vary widely across facial expressions carrying diverse emotions. Considering 10 instances of each facial expression of the same emotion from each of the 10 subjects, we obtain a set of 100 distance matrices, where each matrix element represents the distance between two selected action points. The 100 matrices for each emotion are averaged, and the first principal component, representing the most prominent features of the average distance matrix, is evaluated. During the recognition phase, the first principal component obtained from the distance matrix of a given facial expression is evaluated, and its Euclidean distance from the first principal component of each emotion is determined. The unknown facial expression is classified into emotion class j if the Euclidean distance between the obtained principal component and that of the j-th emotion class is minimum. Classification of 120 facial images, containing an equal number of samples for the six emotion classes, reveals an average classification accuracy of 92.5%, the highest being for relax and disgust and the lowest for fear and anger.

The rest of the paper is organized as follows. Section II provides the methodology of the proposed facial action point-based model. Section III gives a brief overview of Principal Component Analysis (PCA). Section IV describes the experiments in two subsections, the former dealing with feature extraction and the latter presenting the results. Section V analyses the performance of the proposed scheme against standard classifiers, and conclusions are listed in Section VI.

II. METHODOLOGY

We now briefly discuss the main steps involved in emotion recognition from facial action points, based upon measurements of the distance matrices of 36 facial feature points for 10 subjects, each providing 10 instances of facial expression for a particular emotion. In this paper, we consider 6 basic emotions: relaxation, happiness, disgust, anger, fear and surprise. We need to classify the facial expression of an unknown person into one of these 6 emotion classes.

The methodology of our proposed algorithm has two main parts. The first part is the facial point based model construction. The second part involves evaluating the emotions of an unknown person.

A. Facial-point based model construction

For constructing the facial action point based model, we selected 10 subjects and marked their faces with 36 facial action points at specific locations. As in Fig. 1, the facial expressions they enact are recorded, and the changes in the co-ordinates of the marked action points are measured for different emotions. The distance between any two selected action points is taken as a feature in the proposed approach. Thus, for each facial expression marked with 36 points, we obtain one 36×36 distance matrix.

Considering 10 instances of each facial expression of the same emotion from each of the 10 subjects, we obtain a set of 100 such distance matrices. The 100 matrices for each emotion are averaged, and the first principal component, representing the most prominent features of the average distance matrix, is evaluated. Thus, for 6 emotions, 6 such principal components are obtained. The emotion of an unknown facial expression can then be evaluated using these 6 principal components obtained from the 10 known subjects. The main steps of constructing the action point based model are as follows:

1. Let there be n subjects, each marked with 36 points on the face. Each subject exhibits k instances of each facial expression representative of the same emotion. So, for m emotions, each subject provides m × k facial expressions.

Fig. 1 Facial action points in different emotions: (a) Relax, (b) Anger, (c) Disgust, (d) Fear, (e) Happy, (f) Surprise


2. The Euclidean distance between every pair of points on a particular face is determined first. Thus, for each facial expression, a 36×36 distance matrix is obtained. Let A_m be the distance matrix of the m-th emotion,

A_m = [d_{ij}], where i, j ∈ [1, 36].

For each emotion, we have to evaluate n × k such matrices A_m.

3. The n × k matrices for each emotion are then averaged. Let us denote this averaged matrix by M_m, which has the same dimension (36 × 36) as A_m. For m emotions, m such matrices M_m are obtained.

4. The first principal component P_m, representing the most prominent features of the average distance matrix M_m, is evaluated for each emotion, m = 1 to 6. Following the principle of PCA, the dimension of the first principal component P_m is reduced to 1 × 36. These m principal components are stored for the recognition of unknown emotions.
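The construction phase admits a compact implementation. The following NumPy fragment is a minimal sketch of steps 1–4, assuming the marked action-point co-ordinates are already available as (36, 2) arrays; the function names (build_distance_matrix, first_principal_component, build_emotion_models) are our own illustration and do not appear in the original work.

```python
import numpy as np

def build_distance_matrix(points):
    """points: (36, 2) array of action-point coordinates for one face.
    Returns the 36x36 matrix of pairwise Euclidean distances."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def first_principal_component(matrix):
    """Treat the 36 rows of a distance matrix as points in R^36 and return
    the unit eigenvector of their covariance matrix with the largest
    eigenvalue, i.e. a 1x36 first principal component."""
    cov = np.cov(matrix, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    vec = eigvecs[:, -1]
    # The sign of an eigenvector is arbitrary; fix a convention so that stored
    # and test components remain comparable by Euclidean distance.
    return vec if vec.sum() >= 0 else -vec

def build_emotion_models(faces_by_emotion):
    """faces_by_emotion: dict mapping emotion name -> list of n*k (36, 2)
    coordinate arrays (10 subjects x 10 instances = 100 faces here).
    Returns a dict mapping each emotion to its stored component P_m."""
    models = {}
    for emotion, faces in faces_by_emotion.items():
        avg = np.mean([build_distance_matrix(f) for f in faces], axis=0)  # M_m
        models[emotion] = first_principal_component(avg)                  # P_m
    return models
```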

B. Evaluating the emotion of an unknown facial expression:

During the recognition phase, the distance matrix is evaluated from the 36 points marked on the unknown facial expression. The first principal component of this distance matrix is then evaluated, and its Euclidean distance from each of the 6 stored emotion principal components is determined. The unknown facial expression is classified into emotion class j if the Euclidean distance between the obtained principal component and that of the j-th emotion class is minimum. The main steps for recognizing the emotion of an unknown facial expression are as follows:

1. Evaluate the distance matrix for the unknown facial expression. Thus, for the unknown face, a 36×36 distance matrix is obtained from the 36 facial action points,

A_unknown = [d_{ij}], where i, j ∈ [1, 36].

2. The first principal component P_unknown of this distance matrix is then evaluated. Following the principle of PCA, the dimension of P_unknown is reduced to 1 × 36.

3. The Euclidean distance between the obtained principal component P_unknown and each stored component P_m is evaluated individually.

4. The unknown facial expression is classified into emotion class j if the Euclidean distance between the obtained principal component and that of the j-th emotion class is minimum.
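Under the same assumptions as the previous sketch (reusing the hypothetical build_distance_matrix and first_principal_component helpers), the recognition phase reduces to a nearest-component search:

```python
def classify_expression(points, models):
    """points: (36, 2) coordinates of the unknown face.
    models: dict mapping emotion -> stored first principal component P_m.
    Returns the emotion class whose P_m is nearest to P_unknown."""
    p_unknown = first_principal_component(build_distance_matrix(points))
    distances = {emotion: np.linalg.norm(p_unknown - p_m)
                 for emotion, p_m in models.items()}
    return min(distances, key=distances.get)
```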

III. AN OUTLINE OF PRINCIPAL COMPONENT ANALYSIS

Principal Component Analysis (PCA) [30] is a method of representing data points in a compact form by reducing their dimensionality. Let us, for instance, consider a data set D = {x | x ∈ R^N}, where each data point x lies in an N-dimensional space. Suppose we are interested in finding the first two principal components of D. This can be made clear with Fig. 2.

The first principal component PC1 is the direction along which the points in D have maximum variance. The second principal component PC2 is the direction orthogonal to PC1 along which the variance is maximum. The principle stated above can be extended in this manner to determine the third, fourth and higher principal components.

It is possible to transform a data point x to y by an effective transformation x → y, where x ∈ R^N and y ∈ R^p, p < N. This can be accomplished by projecting the data points onto the principal subspace formed by the first p principal components. This is the basis of dimensionality reduction, and this technique of representing data is referred to as subspace decomposition.

Let X̂ be the reconstructed data point obtained from the projections y_i onto the p largest principal component directions q_i:

\hat{X} = \sum_{i=1}^{p} y_i q_i        (1)

The error vector e = X − X̂ is orthogonal to the approximate data vector X̂. This is called the principle of orthogonality.

Fig. 2 Directions of the first two principal components, PC1 and PC2, of two-dimensional data points (x1, x2)
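As a concrete illustration of the subspace decomposition in (1), the following NumPy sketch (our own, not code from the paper) projects centred data onto the first p principal directions, reconstructs it, and checks the principle of orthogonality numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 points in R^5 (N = 5)
Xc = X - X.mean(axis=0)                  # centre the data

# principal directions q_i = eigenvectors of the covariance matrix
eigvals, Q = np.linalg.eigh(np.cov(Xc, rowvar=False))
Q = Q[:, ::-1]                           # sort directions by decreasing variance

p = 2
y = Xc @ Q[:, :p]                        # projections onto the first p directions
X_hat = y @ Q[:, :p].T                   # reconstruction, eq. (1)
e = Xc - X_hat                           # error vector

# principle of orthogonality: e is orthogonal to the reconstruction
print(np.allclose((e * X_hat).sum(axis=1), 0.0))   # True (up to round-off)
```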


IV. EXPERIMENTS AND RESULTS

In this section, we present the experimental details of emotion recognition using the principles introduced in Section II. We consider 6 emotion classes (i.e., m = 6): anger, fear, disgust, surprise, happiness and relaxation. The experiment is conducted with two sets of facial expressions: a) a first set of 10 × 10 × 6 = 600 facial expressions from n (= 10) subjects, used for constructing the facial action point based model, and b) a second set of 120 facial expressions, used to validate the proposed emotion classification scheme. The 36 facial points are used here as features.

We now briefly overview the main steps of feature extraction followed by experimental results.

A. Feature Extraction

Feature extraction is the fundamental step in emotion recognition. This paper considers extraction of features from emotionally rich facial expressions synthesized by the subjects through acting. Existing research results [15], [23] reveal that the eyes, lips and eyebrow regions are the most important facial regions responsible for the manifestation of emotion, and that some further regions, namely the forehead and part of the cheek, also reflect emotional changes in the face. This motivated us to mark 36 points at specific locations on the face. Fig. 3 shows the locations of these feature points on a selected facial image.

These selected points, and hence their co-ordinates, shift with the change of facial actions for different emotions. To keep the measurements of an emotional expression normalized and free from variations in the distance from the camera focal plane, we consider the distance between every pair of points as the features. So, for a particular emotional face, we have a 36×36 distance matrix over the 36 feature points. The emotion recognition problem addressed here attempts to determine the emotion of an unknown person from his/her facial expression. So, for an unknown face too, we first evaluate the co-ordinates of the marked points and then the Euclidean distance between each pair of points to construct the distance matrix, as indicated below:

    | d1,1   d1,2   d1,3   ...   d1,36  |
    | d2,1   d2,2   d2,3   ...   d2,36  |
    | ...                               |
    | d36,1  d36,2  d36,3  ...   d36,36 |

B. Results

As mentioned earlier, the action point based model is validated with a set of 20 facial expressions for each of the 6 emotions, i.e., a total of 120 unknown emotional faces to be recognized by the model. The experiment shows that, out of the 20 facial expressions of each emotion, our model correctly classifies 20 emotional faces for relax and disgust, 19 faces for surprise, 18 faces for happiness, and 17 faces for anger and fear. The results of classification by the proposed approach are summarized in Table I, which provides the percentage of classification (and misclassification) of each emotion. It is apparent from the Table that there exists a disparity in the diagonal entries of the confusion matrix, indicating that the percentage classification is not uniform across emotion classes. The emotions relax and disgust have a classification accuracy of 100%, while emotions like anger and fear have a poorer classification accuracy of barely 85%. The disparity in classification is due to individual differences in expressing an emotion. Naturally, when the principal component (PC) obtained from an unknown facial instance is compared with the respective principal components of the different emotion classes, another class with a similar PC sometimes matches the test PC better.

Experiments on the emotional facial expressions show that the classification accuracies for the emotions Relax, Anger, Disgust, Happiness, Fear and Surprise are 100%, 85%, 100%, 90%, 85% and 95% respectively. So, the overall accuracy of recognizing emotion from facial action points through the proposed model is 92.5%.

TABLE I. CONFUSION MATRIX OF EMOTION CLASSIFICATION USING PCA

In/Out       Relax   Anger   Disgust   Happiness   Fear   Surprise
Relax         100      0        0          5         0        0
Anger           0     85        0          5        10        0
Disgust         0      0      100          0         0        0
Happiness       0      5        0         90         0        0
Fear            0     10        0          0        85        5
Surprise        0      0        0          0         5       95


Fig. 3 Locations of Facial action points


V. PERFORMANCE ANALYSIS

Two statistical tests, namely McNemar's test and the Friedman test, have been performed to compare the relative performance of our algorithm with 4 standard techniques for emotion recognition. The study was performed with our own face database, as other face databases do not include action points marked on the facial expressions.

A. The McNemar Test

Let f_A and f_B be the classifier outputs obtained by algorithms A and B, when both algorithms use a common training set R. We now define the null hypothesis

Pr_{R,x}[f_A(x) = f(x)] = Pr_{R,x}[f_B(x) = f(x)],        (2)

where f(x) is the experimentally induced target function mapping any data point x onto one of the K emotion classes. Let n01 be the number of examples misclassified by f_A but not by f_B, and n10 be the number of examples misclassified by f_B but not by f_A. Then, following (2), we define the statistic

Z = (|n_{01} - n_{10}| - 1)^2 / (n_{01} + n_{10})        (3)

Let A be our PCA based algorithm and B be one of 4 standard algorithms: Support Vector Machine (SVM), Back-Propagation, Multilayer Perceptron (MLP), and Radial Basis Function (RBF) network. We thus evaluate Z = Z1 through Z4, where Zj denotes the comparator statistic of misclassification between PCA (algorithm A) and the j-th of the 4 algorithms (algorithm B).
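The comparator statistic in (3) reduces to a one-line computation. The short fragment below (our own sketch, not part of the paper) reproduces the Zj values reported in Table II from the listed n01 and n10 counts.

```python
def mcnemar_z(n01, n10):
    """Continuity-corrected McNemar statistic of (3)."""
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

# n01, n10 pairs taken from Table II (reference algorithm: PCA)
for name, n01, n10 in [("SVM", 0, 5), ("Back-Propagation", 0, 6),
                       ("MLP", 0, 12), ("RBF", 0, 17)]:
    z = mcnemar_z(n01, n10)
    print(name, round(z, 3), "rejected" if z > 3.841 else "accepted")
```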

TABLE II. STATISTICAL COMPARISON 1: MCNEMAR'S TEST (REFERENCE ALGORITHM = PCA)

Classifier algorithm      n01    n10    Zj       Hypothesis accepted/rejected
SVM                        0      5     3.2      accepted
Back-Propagation           0      6     4.166    rejected
MLP                        0     12     10.08    rejected
RBF                        0     17     15.059   rejected

Table II is evaluated to obtain Z1 through Z4. The hypothesis is rejected if Zj > χ²_{1, 0.95} = 3.841459, which indicates that the null hypothesis is correct with a probability of at most 5%, and so we reject it. If the hypothesis is not rejected, we consider it accepted. The acceptance or rejection decision is also included in Table II.

B. Modified Friedman Test

In the Friedman test, N, the number of databases, is usually large. Because of the lack of face databases with marked action points, we tested our results with only the Indian (Jadavpur University) database, so the Friedman test cannot be applied directly to the present problem. We propose a modified Friedman test where, instead of taking the average rank of each algorithm over many databases, we consider the rank obtained from the classification accuracy measured on our database only. The null hypothesis here states that all the algorithms are equivalent, so their ranks R_j should be equal. The Friedman statistic

\chi_F^2 = \frac{12N}{k(k+1)} \left[ \sum_j R_j^2 - \frac{k(k+1)^2}{4} \right]        (4)

is distributed according to χ² with k − 1 degrees of freedom. Here, k = 5 and N = 1. We consider the percentage accuracy of classification as the basis of the rank. Table III provides the percentage accuracy of classification with respect to our single database.

TABLE III. STATISTICAL COMPARISON 2: FRIEDMAN TEST

Classifier algorithm j    Classification accuracy    Rank obtained through experiments
PCA                       92.5%                      1
SVM                       88.3%                      2
Back-Propagation          87.5%                      3
MLP                       82.5%                      4
RBF                       78.3%                      5

Now, from Table III, we obtain the ranks R_j. The statistic χ²_F, evaluated using (4) with N = 1 and k = 5, is found to be 4. Since 4 > χ²_{1, 0.95} = 3.841459, the null hypothesis, claiming that all the algorithms are equivalent, is rejected, and therefore the performances of the algorithms are determined by their ranks. It is clear from the table that the rank of PCA is 1, indicating that PCA outperforms all the other algorithms under the Friedman test.
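With a single database (N = 1) and k = 5 classifiers, the statistic in (4) can be evaluated directly from the ranks in Table III. The fragment below (our own sketch) reproduces the reported value χ²_F = 4.

```python
def friedman_statistic(ranks, n_databases=1):
    """Friedman statistic of (4) for k algorithms ranked over N databases."""
    k = len(ranks)
    sum_sq = sum(r ** 2 for r in ranks)
    return 12 * n_databases / (k * (k + 1)) * (sum_sq - k * (k + 1) ** 2 / 4)

print(friedman_statistic([1, 2, 3, 4, 5]))   # ranks of PCA, SVM, BP, MLP, RBF -> 4.0
```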

VI. CONCLUSION

The distance matrix between every pair of action points is the central feature of this work. The distance matrix obtained from a facial expression is real and symmetric, and provides a general framework for our methodology based on Principal Component Analysis. The results are appealing. The diagonal entries of the confusion matrix reveal that for the emotions relax and disgust the classification accuracy is very high (100%),



while for a few emotions, which are often found mixed with other simple emotions, the results are not as convincing. Statistical tests have been used to validate that the null hypothesis, claiming that the performance of our algorithm is equivalent to that of the other algorithms, is false and is rejected within a confidence limit of 95%. The results obtained from McNemar's test confirm that our proposed algorithm outperforms three of the competing algorithms. The results obtained by the Friedman test also indicate that our algorithm's performance is better than that of the other standard algorithms popularly used in the literature of emotional intelligence.

REFERENCES

[1] A. Konar and A. Chakraborty (Eds.), Advances in Emotion Recognition, Wiley-Blackwell, 2011 (to appear).
[2] O. A. Uwechue and S. A. Pandya, Human Face Recognition Using Third-Order Synthetic Neural Networks, Boston, MA: Kluwer, 1997.
[3] B. Biswas, A. K. Mukherjee, and A. Konar, “Matching of digital images using fuzzy logic,” AMSE Publication, vol. 35, no. 2, pp. 7–11, 1995.
[4] A. Bhavsar and H. M. Patel, “Facial expression recognition using neural classifier and fuzzy mapping,” IEEE Indicon 2005 Conference, Chennai, India, 2005.
[5] Y. Guo and H. Gao, “Emotion recognition system in images based on fuzzy neural network and HMM,” Proc. 5th IEEE Int. Conf. on Cognitive Informatics (ICCI’06), 2006.
[6] M. Rizon, M. Karthigayan, S. Yaacob, and R. Nagarajan, “Japanese face emotions classification using lip features,” Geometric Modelling and Imaging (GMAI’07), IEEE, 2007.
[7] H. Kobayashi and F. Hara, “Measurement of the strength of six basic facial expressions by neural network,” Trans. Jpn. Soc. Mech. Eng. (C), vol. 59, no. 567, pp. 177–183, 1993.
[8] H. Kobayashi and F. Hara, “Recognition of mixed facial expressions by neural network,” Trans. Jpn. Soc. Mech. Eng. (C), vol. 59, no. 567, pp. 184–189, 1993.
[9] H. Kobayashi and F. Hara, “The recognition of basic facial expressions by neural network,” Trans. Soc. Instrum. Contr. Eng., vol. 29, no. 1, pp. 112–118, 1993.
[10] H. Tsai, Y. Lai, and Y. Zhang, “Using SVM to design facial expression recognition for shape and texture features,” Proc. Ninth Int. Conf. on Machine Learning and Cybernetics, Qingdao, July 2010.
[11] P. Ekman and W. V. Friesen, Unmasking the Face: A Guide to Recognizing Emotions From Facial Clues, Englewood Cliffs, NJ: Prentice-Hall, 1975.
[12] H. Zhao, Z. Wang, and J. Men, “Facial complex expression recognition based on fuzzy kernel clustering and support vector machines,” Third Int. Conf. on Natural Computation (ICNC 2007), IEEE, 2007.
[13] Y. Lee, C. W. Han, and J. Shim, “Fuzzy neural networks and fuzzy integral approach to curvature-based component range facial recognition,” Int. Conf. on Convergence Information Technology, IEEE, 2007.
[14] J. M. Sun, X. S. Pei, and S. S. Zhou, “Facial emotion recognition in modern distant system using SVM,” Proc. Seventh Int. Conf. on Machine Learning and Cybernetics, Kunming, July 2008.
[15] S. Das, A. Halder, P. Bhowmik, A. Chakraborty, A. Konar, and A. K. Nagar, “Voice and facial expression based classification of emotion using linear support vector,” Second Int. Conf. on Developments in eSystems Engineering, IEEE, 2009.
[16] R. W. Picard, E. Vyzas, and J. Healey, “Toward machine emotional intelligence: Analysis of affective physiological state,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 10, pp. 1175–1191, 2001.
[17] M. Pantic and I. Patras, “Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences,” IEEE Trans. Syst., Man, Cybern. B, vol. 36, no. 2, pp. 433–449, 2006.
[18] M. Pantic, I. Patras, and L. J. M. Rothkrantz, “Facial action recognition in face profile image sequences,” Proc. IEEE Int. Conf. ICME, pp. 37–40, 2002.
[19] M. F. Valstar and M. Pantic, “Combined support vector machines and hidden Markov models for modelling facial action temporal dynamics,” article in a conference proceedings.
[20] M. Valstar and M. Pantic, “Fully automatic facial action unit detection and temporal analysis,” Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW’06), 2006.
[21] P. Ekman and W. Friesen, Facial Action Coding System, Palo Alto: Consulting Psychologists Press, 1978.
[22] P. Ekman, “Methods for measuring facial action,” in K. R. Scherer and P. Ekman (Eds.), Handbook of Methods in Nonverbal Behavior Research, New York: Cambridge University Press, 1982, pp. 45–135.
[23] A. Chakraborty, A. Konar, U. K. Chakraborty, and A. Chatterjee, “Emotion recognition from facial expressions and its control using fuzzy logic,” IEEE Trans. Syst., Man, Cybern. A, 2009.
[24] M. Khademi, M. T. Manzuri-Shalmani, M. H. Kiapour, and A. A. Kiaei, “Recognizing combinations of facial action units with different intensity using a mixture of hidden Markov models and neural network,” in N. El Gayar, J. Kittler, and F. Roli (Eds.), MCS 2010, LNCS 5997, pp. 304–313, 2010.
[25] Y. Tong, W. Liao, and Q. Ji, “Inferring facial action units with causal relations,” Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR’06), 2006.
[26] Y.-L. Tian, T. Kanade, and J. F. Cohn, “Evaluation of Gabor-wavelet-based facial action unit recognition in image sequences of increasing complexity,” article in a journal.
[27] J. F. Cohn, A. J. Zlochower, J. Lien, and T. Kanade, “Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding,” Psychophysiology, vol. 36, pp. 35–43, 1999.
[28] M. S. Bartlett, P. A. Viola, T. J. Sejnowski, B. A. Golomb, J. Larsen, J. C. Hager, and P. Ekman, “Classifying facial action,” Advances in Neural Information Processing Systems 8, D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), MIT Press, 1996, pp. 823–829.
[29] C.-L. Huang and Y.-M. Huang, “Facial expression recognition using model-based feature extraction and action parameters classification,” Journal of Visual Communication and Image Representation, vol. 8, no. 3, pp. 278–290, 1997.
[30] A. Konar, Computational Intelligence: Principles, Techniques and Applications, Springer, Berlin Heidelberg, New York, 2005.
[31] J. Demsar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
[32] T. G. Dietterich, “Approximate statistical tests for comparing supervised classification learning algorithms.”