
LYU0603

A Generic Real-Time Facial Expression Modelling System

Supervisor:

Prof. Michael R. Lyu

Group Member:

Cheung Ka Shun (05521661)

Wong Chi Kin (05524554)

Outline

• Previous Work
• Objectives
• Work in Semester Two
• Review of implementation tools
• Implementation
  – Virtual Camera
  – 3D Face Generator
  – Face Animation
• Conclusion
• Q&A

Previous Work

• Face analysis
  – Detect the facial expression
  – Draw the corresponding model

Objectives

• Enrich the functionality of web-cam

• Make net-meeting more interesting

• Users are not required to buy extra, specialised hardware

• Extract human face and approximate face shape

Work in Semester Two

• Virtual Camera
  – Make Facial Expression Modelling available in net-meeting software

• Face Generator
  – Approximate the face shape
  – Generate a 3D face texture

• Face Animation
  – Animate the generated 3D face
  – Convert it into a standard file format

Review - DirectShow

• Filter graph – Source → Transform → Renderer
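The Source → Transform → Renderer chain can be pictured as a push pipeline. A minimal conceptual sketch in C++ (illustrative classes only, not the actual DirectShow COM interfaces):

```cpp
#include <cassert>
#include <vector>

// Conceptual sketch of a DirectShow-style filter graph: a source filter
// produces frames, transform filters modify them, and a renderer consumes
// them. All names here are illustrative, not real DirectShow types.
struct Frame { std::vector<unsigned char> pixels; };

struct Filter {
    virtual Frame process(Frame in) = 0;
    virtual ~Filter() = default;
};

struct SourceFilter : Filter {
    Frame process(Frame) override { return Frame{{10, 20, 30}}; } // "capture" a frame
};

struct DarkenTransform : Filter {
    Frame process(Frame in) override {
        for (auto& p : in.pixels) p /= 2;  // example transform step
        return in;
    }
};

struct Renderer : Filter {
    Frame last;
    Frame process(Frame in) override { last = in; return in; }    // "display" the frame
};

// The graph simply pushes each frame downstream through the connected filters.
Frame runGraph(std::vector<Filter*>& graph) {
    Frame f;
    for (auto* filter : graph) f = filter->process(f);
    return f;
}
```

In the real system each filter is a COM object connected through pins by the filter graph manager; the sketch keeps only the data-flow idea.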

Review - Direct3D

• Efficiently process and render 3-D scenes to a display, taking advantage of available hardware

• Fully Compatible with DirectShow

Virtual Camera

• Focus on MSN Messenger

Virtual Camera

• Two components
  – 3D model as output
  – Face Mesh Preview

Virtual Camera

• It is actually implemented as a source filter

Virtual Camera

[Filter graph diagram: Capture Device → FaceExModeling → Sample Grabber → Null Renderer, with FaceCoord and Video Renderer filters]

Create an Inner Filter Graph

• An inner filter graph is built inside the virtual camera

Virtual Camera

Demonstration

• We are going to play a movie clip which demonstrates the Virtual Camera

3D Face Generator

• Aim: To approximate the human face and its shape

• Comprises two parts

[System diagram: Face Texture Generator and FaceLab communicate through a Common Buffer]

FaceLab

• Adopted from the face analysis project of Zhu Jian Ke, a CUHK CSE Ph.D. student

• The analysis is decomposed into a training part and a building part

• The training phase is made up of three steps:
  – 3D Data Acquisition
  – 3D Data Registration
  – Shape Model Building

• However, thousands of points are needed to describe the complex structure of a human face

• They can be acquired either by a 3D scanner or by computer vision algorithms

FaceLab – Data Acquisition

• To acquire human face structure data

• In 2D face modelling, 100 feature points are sufficient to represent the face surface

FaceLab – Data Registration

• To normalize the 3D data into the same scale, with correspondences

• Problems
  – The most accurate way is to compute 3D optical flow
  – Commercial 3D scanners and 3D registration require specific hardware

FaceLab

• To simplify the process, we decided to use software to generate the human face data

• Each sample has a set of 752 3D vertices describing the shape of the face

FaceLab – Shape Model Building

• A shape is defined as the geometry data with the translational, rotational and scaling components removed

• An object containing N vertices is represented as the 3N-vector

  S = (x_1, ..., x_N, y_1, ..., y_N, z_1, ..., z_N)^T

FaceLab – Shape Model Building

• The set of P shapes will form a point cloud in 3N-dimensional space which is a huge domain.

• A conventional principal component analysis (PCA) is performed.

FaceLab – Shape Model Building: PCA Implementation

• PCA performs an orthogonal linear transformation

• It yields a new coordinate system whose axes point in the directions of maximum variation of the point cloud

• In this implementation, the covariance method is used

FaceLab – Shape Model Building: PCA Implementation

• Step 1: Compute the empirical mean, which is the mean shape along each dimension:

  S_mean = (1/P) · Σ_{i=1..P} S_i

FaceLab – Shape Model Building: PCA Implementation

• Step 2: Calculate the covariance matrix C:

  C = (1/(P−1)) · Σ_{i=1..P} (S_i − S_mean)(S_i − S_mean)^T

• The axes of the point cloud are obtained from the eigenvectors of the covariance matrix.

FaceLab – Shape Model Building: PCA Implementation

• Step 3: Compute the matrix of eigenvectors V such that

  V^{−1} C V = D

  where D is the eigenvalue matrix of C

• Each eigenvalue represents the distribution of the object data's energy

FaceLab – Shape Model Building: PCA Implementation

• Final step: Represent the resulting shape model as

  S = S_mean + V_s · m_s

  where m_s are the shape parameters

• Adjusting the values of the shape parameters generates a new face model; for a given shape they are computed as

  m_s = V_s^T (S − S_mean)

FaceLab – Shape Model Building: PCA Implementation

• An extra step: Select a subset of the eigenvectors

• Each eigenvalue represents the variance along the corresponding axis

• The first seven columns are used in the system, capturing the majority of the total variance.
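The shape-parameter formulas can be sketched in a few lines of C++. This is an illustrative toy example, not the project's code: `encode` and `decode` implement m_s = V_s^T (S − S_mean) and S = S_mean + V_s · m_s for a tiny hand-picked orthonormal basis (a real model works in 3N dimensions with N = 752 vertices):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch of the shape-model formulas: given a mean shape S_mean
// and an orthonormal eigenvector basis Vs (columns), a shape S is encoded as
// parameters ms = Vs^T (S - S_mean) and reconstructed as S = S_mean + Vs * ms.
using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // Mat[j] is eigenvector (column) j

Vec encode(const Mat& Vs, const Vec& mean, const Vec& S) {
    Vec ms(Vs.size(), 0.0);
    for (size_t j = 0; j < Vs.size(); ++j)
        for (size_t i = 0; i < S.size(); ++i)
            ms[j] += Vs[j][i] * (S[i] - mean[i]);   // ms = Vs^T (S - S_mean)
    return ms;
}

Vec decode(const Mat& Vs, const Vec& mean, const Vec& ms) {
    Vec S = mean;                                    // S = S_mean + Vs * ms
    for (size_t j = 0; j < Vs.size(); ++j)
        for (size_t i = 0; i < S.size(); ++i)
            S[i] += Vs[j][i] * ms[j];
    return S;
}
```

Because the basis is orthonormal, encoding a shape that lies in its span and decoding it again recovers the shape exactly; truncating to the first few eigenvectors (seven in the system) keeps only the dominant modes of variation.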

FaceLab – Render the face model

• The resulting data set is a 3D face mesh

• Use OpenGL to render it

System Overview of Face Texture Generator

[System diagram: Facial Expression Modelling, FaceLab, the Common Buffer and the Face Texture Generator]

Face Texture Generator

• Face texture extraction – three approaches:
  – Largest area triangle aggregation
  – Human-defined triangles aggregation
  – Single photo on effect face

Largest area triangle aggregation

[Input photos: left face, front face, right face]

  for each triangle in the face mesh:
      extract it from the photo where its projected area is largest

[Output: aggregated face texture]

Largest area triangle aggregation

• Result: adjacent triangles were sampled from different photos

Human-defined triangles aggregation

• Divide the face mesh into three parts

• For each region, define the particular photo from which its triangles are sampled

• Reduces fragmentation

Human-defined triangles aggregation

• Redefine the face mesh – the Effect Face

Human-defined triangles aggregation

• Result

Single photo on effect face

• Similar to Human-defined triangles aggregation

• Use a single photo for pixel sampling

• Use Effect Face as outline

Single photo on effect face

• Result

Face Generator Filter

Dynamic Texture Generation

• To get back the rendered data from the video display card

[Diagram: Face Generator, FaceLab, the Common Buffer and the Video Card Buffer]

Dynamic Texture Generation

• Lock the video display buffer

[Diagram: the Video Card Buffer is locked and copied to the Common Buffer]

Dynamic Texture Generation

[Diagram: the texture is updated with the Common Buffer content]

• The common buffer content is changed
• The texture buffer is updated to reflect the changes immediately
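The lock/copy/update cycle amounts to a double-buffering pattern. A conceptual sketch (the real system locks a Direct3D surface; this models only the data flow, with a dirty flag standing in for the "content changed" signal):

```cpp
#include <cassert>
#include <vector>

// Conceptual sketch of the lock/copy/update cycle: rendered pixels are
// copied from the video card buffer into a shared common buffer, and the
// texture is refreshed only when the common buffer has new content.
struct CommonBuffer {
    std::vector<unsigned char> data;
    bool dirty = false;             // set when new pixels arrive
};

// "Lock" the rendered frame and copy it into the shared buffer
// (a surface lock plus memcpy in the real system).
void copyFromVideoCard(const std::vector<unsigned char>& videoCardBuffer,
                       CommonBuffer& common) {
    common.data = videoCardBuffer;
    common.dirty = true;
}

// Update the texture only when the common buffer has changed.
// Returns true if an update actually happened.
bool updateTexture(CommonBuffer& common, std::vector<unsigned char>& texture) {
    if (!common.dirty) return false;
    texture = common.data;
    common.dirty = false;
    return true;
}
```

Skipping the texture upload when nothing changed keeps the per-frame cost low, which matters because locking video memory stalls the graphics pipeline.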

Dynamic Texture Generation

• From 2D face mesh to 3D face mesh

Completed 3D Face Generator

Demonstration

• We are going to play a movie clip which demonstrates Face Generator

Face Viewer

Generate simple animation

Looking at the mouse cursor

• Feature points provide sufficient information to locate the eyes

• The two eyes and the mouse cursor form a triangular plane
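One way to turn the eye positions and the cursor position into a gaze rotation is to take the midpoint between the two eye feature points and derive yaw and pitch angles with atan2. This is an illustrative sketch, not the project's actual computation; `planeDist` (the assumed distance of the cursor plane in front of the face) is a made-up parameter:

```cpp
#include <cassert>
#include <cmath>

// Sketch of steering the eyes toward the cursor: the midpoint between the
// eye feature points is the origin of the gaze direction, and the cursor is
// assumed to lie on a plane at distance planeDist in front of the face.
struct Vec3 { double x, y, z; };

void gazeAngles(Vec3 leftEye, Vec3 rightEye, double cursorX, double cursorY,
                double planeDist, double& yaw, double& pitch) {
    Vec3 mid{ (leftEye.x + rightEye.x) / 2.0,
              (leftEye.y + rightEye.y) / 2.0,
              (leftEye.z + rightEye.z) / 2.0 };
    double dx = cursorX - mid.x;
    double dy = cursorY - mid.y;
    yaw   = std::atan2(dx, planeDist);   // horizontal rotation toward the cursor
    pitch = std::atan2(dy, planeDist);   // vertical rotation toward the cursor
}
```

Applying these two angles to the eye (or head) vertices makes the model appear to look at the cursor as it moves.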

Generate simple animation

Eye blinking

• One of the natural movements of a human face

• Adjust the vertex geometry of the eyelids

• The eyebrows also need to move backward

Generate simple animation

Smiling

• Many muscles need to be modified
• The cheeks are pushed up and outward
• The chin is pulled down
• The shape of the lips is changed

Convert into standard file format

• Save the mesh in .x file format

• The currently selected face mesh data is saved

• The Microsoft DirectX .x file format is not application-specific

• It can therefore be used by other applications
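A minimal sketch of what the text form of a .x Mesh template looks like (vertex list and face list only; the real exporter also writes normals, texture coordinates and materials, and the header details may differ from this assumption):

```cpp
#include <array>
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Illustrative emitter for the text form of the DirectX .x Mesh template:
// a vertex count and "x;y;z;" entries, then a face count and "3;i0,i1,i2;"
// entries. List items are separated by ',' and the list ends with an extra ';'.
struct V { float x, y, z; };

std::string toXFile(const std::vector<V>& verts,
                    const std::vector<std::array<int, 3>>& faces) {
    std::ostringstream out;
    out << "xof 0303txt 0032\n\nMesh {\n";
    out << " " << verts.size() << ";\n";
    for (size_t i = 0; i < verts.size(); ++i)
        out << " " << verts[i].x << ";" << verts[i].y << ";" << verts[i].z
            << ";" << (i + 1 == verts.size() ? ";" : ",") << "\n";
    out << " " << faces.size() << ";\n";
    for (size_t i = 0; i < faces.size(); ++i)
        out << " 3;" << faces[i][0] << "," << faces[i][1] << "," << faces[i][2]
            << ";" << (i + 1 == faces.size() ? ";" : ",") << "\n";
    out << "}\n";
    return out.str();
}
```

Because the format is plain structured text, any application that understands the Mesh template can load the exported face, which is what makes .x a convenient interchange format here.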

Demonstration

• We are going to play a movie clip which demonstrates Face Animation

Conclusion

• We have achieved our goals
  – Enrich the functionality of the webcam
  – Make net-meeting more interesting
  – Extract the human face and approximate the face shape
  – Discover the potential of face recognition technology

End

• Thank you!

• Q&A
