The University of Auckland, New Zealand
Matthias Wimmer, Technische Universität München
Bruce MacDonald, Dinuka Jayamuni, and Arpit Yadav, Department of Electrical and Computer Engineering, Auckland
http://robotics.ece.auckland.ac.nz
19 February 2008
Facial Expression for Human-Robot Interaction – A prototype
Outline
Motivation
Background
Facial expression recognition method
Results on a data set
Results with a robot (the paper contribution)
Conclusions
Motivation: Goal
Our Robotics group goals:
To create mobile robotic assistants for humans
To make robots easier to customize and to program by end users
To enhance interactions between robots and humans
Applications: healthcare, e.g. aged care
Applications: agriculture (e.g. Ian's previous presentation)
(Lab visit this afternoon)
Robotface
Motivation: robots in human spaces
Increasingly, robots live in human spaces and interact closely
InTouch remote doctor
Motivation: close interactions
RI-MAN
http://www.bmc.riken.jp/~RI-MAN/index_us.html
Motivation: different types of robot
Robots have many forms; how do people react?
Pyxis HelpMate SP Robotic Courier System, Delta Regional Medical Centre, Greenville, Mississippi
Motivation: different robot behaviour
AIBO (Sony)
Paro the therapeutic baby seal robot companion
http://www.aist.go.jp/aist_e/latest_research/2004/20041208_2/20041208_2.html
Motivation: supporting the emotion dimension
Robots must give support with psychological dimensions:
home and hospital help
therapy
companionship
We must understand/design the psychology of the exchange
Emotions play a significant role
Robots must respond to and display emotions
Emotions support cognition
Robots must have emotional intelligence
E.g. during robot assisted learning
E.g. security screening robots
Humans' anxiety can be reduced if a robot responds well [Rani et al., 2006]
Motivation: functionality of emotion response
Not just to be “nice”; the emotion dimension is essential to effective robot functionality [Breazeal]
Motivation: robots must distinguish human emotional state
However, recognition of human emotions is not straightforward
Outward expression versus internal mood states
People smile when happy AND when they are simply interacting with other people
Olympic medalists don’t smile until the presenter appears (eg 1948 football team)
Ten pin bowlers smile when they turn back to their friends
Motivation: deciphering human emotions
Self-reports are more accurate than observer ratings
Current research attempts to decipher human emotions from:
facial expressions
speech expression
heart rate, skin temperature, skin conductivity
www.cortechsolutions.com
Motivation: Our focus is on facial expressions
Despite the limitations, we focus on facial expression interpretation from visual information.
Portable, contactless
Needs no special or additional sensors
Similar to humans' interpretation of emotions (which is by vision and speech)
No interference with normal HRI
www.euron.org
Asimo
Background
Six universal facial expressions (Ekman et al.): laughing, surprised, afraid, disgusted, sad, angry
Cohn-Kanade Facial Expression database (488 sequences, 97 people): performed, exaggerated
Determined by: shape, muscle motion
Background: Why are they difficult to estimate?
Different faces look different: hair, beard, skin colour, …
Different facial poses
Only slight muscle activity
Background
Typical FER process [Pantic & Rothkrantz, 2000]
Background: Challenges
1. Face detection and 2. feature extraction challenges:
Varying shape, colour, texture, feature location, hair
Spectacles, hats
Lighting conditions including shadows
3. Facial expression classification challenges:
Machine learning
Background: related work
Cohen et al.: 3D wireframe with 16 surface patches
Bezier volume parameters for patches
Bayesian network classifiers
HMMs model muscle activity over time
Bartlett et al.: Gabor filters using AdaBoost, support vector machines
93% accuracy on Cohn-Kanade DB
Is tuned to the DB
Background: challenges for robots
Less constrained face pose and distance from camera
Human may not be facing the robot
Human may be moving
More difficulty in controlling lighting
Robots move away!
Real time result is needed (since the robot moves)
Facial expression recognition (FER) method
Matt's model based approach
FER method
Cootes et al. statistics-based deformable model (134 points)
Translation, scaling, rotation
Vector b of 17 face configuration parameters
Rotate head b1, open mouth b3, change gaze direction b10
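The deformable model above can be sketched as a Cootes-style point-distribution model: a face shape is the mean shape plus a weighted sum of deformation modes, with the weights collected in the configuration vector b and a similarity transform applied on top. The tiny mean shape and modes below are hypothetical stand-ins, not the actual 134-point, 17-parameter model.

```python
import numpy as np

# Hypothetical miniature point-distribution model: the real face model
# uses 134 landmark points and a 17-dimensional b vector.
NUM_MODES = 2

mean_shape = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])  # unit square
# Each column is one deformation mode (e.g. "open mouth", "rotate head").
modes = np.zeros((8, NUM_MODES))
modes[5, 0] = 1.0   # illustrative mode: move landmark 2 downward

def synthesize(b, scale=1.0, tx=0.0, ty=0.0):
    """Generate landmark coordinates from configuration b plus a
    similarity transform (scaling and translation)."""
    shape = mean_shape + modes @ b
    return shape.reshape(-1, 2) * scale + np.array([tx, ty])

pts = synthesize(np.array([0.5, 0.0]), scale=2.0, tx=10.0, ty=5.0)
```

Fitting then amounts to searching over b (plus the transform parameters) until the synthesized landmarks line up with the face in the image.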
FER method: Model-based image interpretation
The model: contains a parameter vector that represents the model's configuration.
The objective function: calculates a value that indicates how accurately a parameterized model matches an image.
The fitting algorithm: searches for the model parameters that describe the image best, i.e. it minimizes the objective function.
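A minimal sketch of this three-part scheme, with a toy quadratic objective standing in for the real image-comparison function (assumption: the actual system evaluates image evidence around each landmark and searches the 17-dimensional b vector):

```python
import random

def objective(params):
    # Toy stand-in: a real objective compares the model-predicted
    # appearance against the image; lower means a better fit.
    target = [0.3, -0.7]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def hill_climb(params, step=0.5, iters=200, seed=0):
    """Local search: perturb one parameter at a time and keep the
    change whenever the objective decreases."""
    rng = random.Random(seed)
    best = objective(params)
    for _ in range(iters):
        i = rng.randrange(len(params))
        delta = rng.choice([-step, step]) * rng.random()
        params[i] += delta
        score = objective(params)
        if score < best:
            best = score
        else:
            params[i] -= delta  # revert the worsening move
    return params, best

fitted, score = hill_climb([0.0, 0.0])
```

The separation matters: the model, the objective, and the search are independent, so any one can be swapped (e.g. a learned objective, as on a later slide) without touching the others.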
FER method
Two step process for skin colour: see [Wimmer et al., 2006]
Viola & Jones technique detects a rectangle around the face
Derive affine transformation parameters of the face model
Estimate b parameters
Viola & Jones repeated: features are learned to localize face features
Objective function compares an image to a model
Fitting algorithm searches for a good model
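The Viola & Jones step rests on the integral image, which lets any rectangular (Haar-like) feature be summed in constant time; the full detector cascades boosted thresholds over thousands of such features. A sketch of just that core (the cascade and the trained feature set are omitted):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs at most four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive end)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half,
    a crude vertical-edge detector of the kind Viola & Jones boost over."""
    half = w // 2
    left = rect_sum(ii, r, c, r + h, c + half)
    right = rect_sum(ii, r, c + half, r + h, c + w)
    return left - right

img = np.zeros((6, 6))
img[:, :3] = 1.0                         # bright left half, dark right half
ii = integral_image(img)
response = haar_two_rect(ii, 0, 0, 6, 6)  # strong response on the edge
```

In practice one would call an off-the-shelf implementation (e.g. OpenCV's trained cascades) rather than hand-roll this.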
FER method: learned objective function
Reduce manual processing requirements by learning the objective function [Wimmer et al., 2007a & 2007b]
Fitting method: hill-climbing
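Learning the objective function replaces a hand-designed image-to-model comparison with a regressor trained on annotated images: local image features sampled at displaced landmark positions are paired with the known displacement, and the regressor learns to predict how far off a candidate position is. A least-squares sketch under that assumption (the synthetic features and linear learner here are illustrative; the actual work of Wimmer et al. uses its own feature set and learner):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a local image feature vector
# sampled at a perturbed landmark position; the regression target is
# the perturbation magnitude (0 = perfectly placed landmark).
features = rng.normal(size=(200, 5))
true_weights = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
signed_target = features @ true_weights       # synthetic ground truth

# Fit a linear predictor by least squares; a real system would use a
# nonlinear learner trained on annotated face images.
w, *_ = np.linalg.lstsq(features, signed_target, rcond=None)

def learned_objective(x):
    """Predicted displacement: small value means the landmark fits well."""
    return abs(x @ w)
```

The fitting algorithm stays the same hill-climbing loop; only the function it minimizes is now learned instead of hand-crafted.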
FER method
Facial feature extraction: structural (configuration b) and temporal features (2 secs)
Expression classification: binary decision tree classifier is trained on 2/3 of data set
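The classification step can be illustrated with a single learned split (a decision stump), the building block a binary decision tree applies recursively to the structural and temporal features; the two-feature "happy vs. surprised" data below is hypothetical:

```python
def train_stump(samples, labels):
    """Pick the (feature, threshold) split that best separates two
    classes, counting misclassifications and allowing flipped polarity."""
    best = None
    n_features = len(samples[0])
    for f in range(n_features):
        for s in samples:
            thr = s[f]
            errors = sum(
                (x[f] > thr) != (y == 1)
                for x, y in zip(samples, labels)
            )
            errors = min(errors, len(samples) - errors)
            if best is None or errors < best[0]:
                best = (errors, f, thr)
    return best  # (training errors, feature index, threshold)

# Hypothetical features: [mouth-corner lift, mouth opening]
samples = [[0.9, 0.2], [0.8, 0.1], [0.1, 0.8], [0.2, 0.9]]
labels = [1, 1, 0, 0]          # 1 = "happy", 0 = "surprised"
errors, feat, thr = train_stump(samples, labels)
```

A full tree repeats this split selection on each resulting subset; training on 2/3 of the data and testing on the held-out 1/3 matches the evaluation protocol described above.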
Results on a dataset
Happiness and fear have similar muscle activity around the mouth, hence the confusion between them.
Results on a robot
B21r robot
Some controlled lighting
Human about 1m away
120 readings of three facial expressions
12 frames a second possible
Tests at 1 frame per second
Conclusions
Robots must respond to human emotional states
Model based FER technique (Wimmer)
70% accuracy on Cohn-Kanade data set (6 expressions)
67% accuracy on a B21r robot (3 expressions)
Future work: better FER is needed
Improved techniques
Better integration with robot software
Improve accuracy by fusing vital signs measurements