
eHeritage of Shadow Puppetry: Creation and Manipulation

Min Lin∗1, Zhenzhen Hu∗2, Si Liu1, Meng Wang2, Richang Hong2, Shuicheng Yan1

1ECE Department, National University of Singapore; 2School of Computer and Information, Hefei University of Technology

{mavenlin, huzhen.ice, eric.mengwang, hongrc.hfut}@gmail.com, {dcslius, eleyans}@nus.edu.sg

ABSTRACT
To preserve the precious traditional heritage of Chinese shadow puppetry, we propose the puppetry eHeritage, including a creator module and a manipulator module. The creator module accepts a frontal view face image and a profile face image of the user as input, and automatically generates the corresponding puppet, which looks like the original person and meanwhile has some typical characteristics of traditional Chinese shadow puppetry. In order to create the puppet, we first extract the central profile curve and warp the reference puppet eye and eyebrow to the shape of the frontal view eye and eyebrow. Then we transfer the puppet texture to the real face area. The manipulator module accepts a script provided by the user as input and automatically generates the motion sequences. Technically, we first learn atomic motions from a set of shadow puppetry videos. A scripting system converts the user's input to atomic motions, and finally synthesizes the animation based on the atomic motion instances. For better visual effects, we propose the sparsity optimization over simplexes formulation to automatically assemble weighted instances of different atomic actions into a smooth shadow puppetry animation sequence. We evaluate the performance of the creator module and the manipulator module sequentially. Extensive experimental results on the creation of puppetry characters and puppetry plays well demonstrate the effectiveness of the proposed system.

Categories and Subject Descriptors
J.5 [Arts and Humanities]: Arts, fine and performing

Keywords
Chinese shadow puppetry; face rendering; sparsity optimization over simplex; animation

1. INTRODUCTION
Shadow puppetry has a long history in China, Indonesia, India, Greece, etc. as a form of entertainment for both children and adults, and is also popular in many other countries around the world¹. We focus on Chinese shadow puppetry in this work. Chinese shadow puppetry, as shown in Figure 1(a), is a traditional artistic form of theater performance with colorful silhouette figures. These figures are produced with puppets made of leather or paper. A traditional Chinese shadow puppet² is composed of a head part shown in Figure 1(b) and a body part shown in Figure 1(c). As an ancient form of storytelling, sticks and flat puppets are manipulated behind an illuminated background to create moving pictures [1]. By moving both the puppets and the light source, various effects can be achieved. But in the 21st century, shadow puppetry in China is on a steep and fast decline. Audiences and apprentices are evaporating at an alarming rate. To protect this ancient artistic heritage, China's State Council put Chinese shadow puppetry on the first list of National Intangible Cultural Heritage of China in 2006, and the United Nations Educational, Scientific and Cultural Organization (UNESCO) listed the artistic form as Intangible Cultural Heritage in 2011³.

∗ indicates equal contribution

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
MM'13, October 21–25, 2013, Barcelona, Spain.
Copyright 2013 ACM 978-1-4503-2404-5/13/10 ...$15.00.
http://dx.doi.org/10.1145/2502081.2502104.

Nowadays, the preservation of cultural heritage is attracting growing attention worldwide. Our motivation is to develop a system that applies the latest multimedia technologies to aid the preservation, interpretation, and dissemination of this ancient cultural heritage. To attract more people to shadow puppetry, we design two puppetry modules, a creator module and a manipulator module, so that people all over the world can experience the fun of creating and performing puppets by themselves.

The input of the creator module is two face images. One of the most important advantages of our creator module is personalization. The traditional process of making a puppet includes seven steps, all complex and ingenious⁴. Our creator module can automatically generate anyone's personalized puppet, and the process is almost immediate. We pay special attention to keeping the characteristics of the traditional puppet during the automatic creation process. For example, the puppet's face has long narrow eyes, a small mouth and a straight nose bridge, as shown in Figure 1(b). Technically, we extract the central profile curve from the profile face image and warp the reference puppet eye into

¹http://en.wikipedia.org/wiki/Shadow_puppetry
²In this paper, we use puppet and puppetry interchangeably.
³http://www.unesco.org/culture/ich/index.php?lg=en&pg=00011&RL=00421
⁴http://www.travelchinaguide.com/intro/focus/shadow-puppetry.htm

Area Chair: Marc Cavazza


Figure 1: (a) A scene of traditional Chinese shadow puppetry. When light penetrates through a translucent sheet of cloth, the audience will see the "shadows", silhouettes in profile. (b) Faces of puppets are characterized by exquisite headdresses and representative facial features, such as thin eyebrows. (c) The puppets are manipulated with sticks through three keypoints, i.e., neck, left hand and right hand. (d) and (e) are two results of our puppetry creator module for male and female respectively. (f) is an exemplar sequence generated based on user-provided scripts by the puppetry manipulator module. All of the figures in this paper are best viewed in the original PDF file.

the frontal view eye simultaneously. Then we transfer the puppet texture as the texture of the profile curve. One male and one female puppet generated by our creator module are shown in Figure 1(d) and Figure 1(e).

We also design a manipulator module. For this module, we aim to preserve the performance pattern of Chinese shadow puppetry during the manipulation process. There are several basic puppet motion patterns (denoted as atomic actions hereafter), such as walk, dance, fight, nod, laugh, etc. Besides, as shown in Figure 1(c), a real puppet is controlled with three sticks fixed to the puppet's neck and two hands separately, and the motion of the other puppet parts is affected by gravity. Therefore, we need to simulate all the atomic actions in the puppet style. There are some existing research works on shadow puppets focusing on the user's body interaction with the virtual puppet, i.e., the puppet's motion imitates the user's motion [2], [3], [4]. However, none of them preserves the puppet's specific motion style. In contrast, our module can convey this traditional artistic charm completely. Our manipulator module directly accepts text scripts as input and displays the specified motions accordingly. The interface is shown in Figure 1(f). For manipulation, we identify atomic motions for the animation and collect instances from a set of shadow puppetry videos. A scripting system converts the user input into atomic motions, and finally synthesizes the animation using the collected instances.

The rest of this paper is organized as follows. In Section 2, we provide a review of related work. Section 3 gives an overview of our puppetry modules: creator and manipulator. Next, in Sections 4 and 5, a more detailed step-by-step introduction of the modules is presented. The experimental results are shown in Section 6. Finally, we conclude the paper in Section 7.

2. RELATED WORK
Our shadow puppetry system is built on several areas of related work.

2.1 eHeritage
Recently, for the purpose of preserving cultural heritage through the application of advanced computing technologies, eHeritage has become a hot research topic. The related literature is mainly divided into two categories: tangible cultural heritage and intangible cultural heritage. Tangible cultural heritage⁵ includes buildings and historic places, monuments, artifacts, etc. Anna Paviotti et al. [5] dealt with the problem of estimating the lighting field of the multispectral acquisition of frescoes by a variational method. Lior Wolf et al. [6] studied the task of finding the joins of the Cairo Genizah, a precious collection of mainly Jewish texts. Tao Luo et al. [7] presented a multi-scale framework to generate 3D line drawings for archaeological illustration. Intangible cultural heritage⁶ includes traditional festivals, oral traditions, oral epics, customs, ways of life, traditional crafts, etc. Markus Seidl et al. [8] proposed the detection of gradual transitions in historic material. Anupama Mallik et al. [9] studied the preservation of Indian classical dance. Shadow puppetry belongs to intangible cultural heritage.

2.2 Face Rendering
Digital image processing provides a solid foundation for building artistic rendering algorithms. All image-based artistic rendering approaches utilize image processing operations in some form to extract information or synthesize results [10]. Non-photorealistic rendering focuses on enabling a wide variety of expressive styles of digital art. Among these arts, cartoon and paper cutting are the most related to our work.

Cartoon: A cartoon is a form of two-dimensional illustrated visual art with a typically non-realistic or semi-realistic drawing or painting. The CharToon system [11] provides special skeleton-driven components, an extensive set of building blocks to design faces, and support for re-using components and pieces of animations. Chen et al. [12] explored a Pictoon system that allows users to create a personalized cartoon and animation from an input face image based on sketch generation and cartoon stroke rendering.

Paper Cutting: Artistic paper cutting is also a traditional and popular Chinese decorative art which, usually in a very concise two-tone form, has its unique beauty of expressive abstraction. Xu et al. [13] generated arrangements of shapes via a multilayer thresholding operation to compose digital paper cutting designs. M. Meng et al. [14] rendered paper cutting images from human portraits. They localized facial components and used pre-collected representative paper cutting templates. Then they obtained a synthesized paper cutting image by matching templates with bottom-up proposals.

⁵http://www.unesco.org/new/en/cairo/culture/tangible-cultural-heritage/
⁶http://www.unesco.org/new/en/cairo/culture/intangible-cultural-heritage/

Our work differs from cartoon and paper cutting in that we focus only on the face rendering of shadow puppetry, which has its own unique characteristics. Thus, specifically tailored image processing techniques are required.

2.3 Puppetry Animation
Developing an interactive and user-friendly interface for people of all skill levels to create animation is a long-standing problem in computer graphics. Recently, a few works have been conducted on digital puppetry. As a visualization tool for traditional cinematic animation, digital puppetry transforms the movements of a performer into the actions of an animated character to provide live performance, such as [15].

Shadow puppets are also used to produce animated films, but producing an animated film with shadow puppets, frame by frame, is laborious and time-consuming. Solutions for animation performed by two-dimensional puppets have appeared only recently. Hsu et al. [16] introduced a motion planning technique which automatically generates the animation of 2D puppets. Barnes et al. [17] created a video-based animation interface: users first create a cast of physical puppets and move these puppets to tell a story; the system tracks the motions and renders an animated video. Tan et al. [18] presented a method for interactive animation of 2D shadow play puppets by real-time visual simulation using texture mapping, blending techniques, and lighting and blurring effects. Kim et al. [19] controlled 3D avatars to create user-designed peculiar motions of avatars in real time using general interfaces. ShadowStory [20] is a project created by Fei et al. for digital storytelling inspired by traditional Chinese shadow puppetry; the system allows children to design their own puppets and animate them with a tablet PC. Pan et al. [21] presented a 2D shape deformation of a triangulated cartoon driven by its skeleton, where the animation is obtained by retargeting the skeleton joints to the shape.

There are also some websites where users can play with puppetry, such as Puppet Parade⁷, an interactive puppetry installation that animates puppets by tracking the arms of the puppeteers, or We Be Monsters⁸, a collaborative puppet art installation that tracks multiple skeletons to animate a puppet. Although shadow puppetry is not the focus of these websites, body motion is used to control the puppets. None of these methods accurately captures the unique motion style of shadow puppetry.

With the development and increasing popularity of the Microsoft Kinect camera, many researchers have begun to explore how to achieve human-puppetry interaction via Kinect. Robert Held et al. [22] presented a 3D puppetry system that allows users to quickly create 3D animations by performing the motions with their own familiar, rigid toys, props, and puppets. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Leite et al. [23] presented an anim-actor technique: a real-time interactive puppet control system using low-cost motion capture based on the body movements of non-expert artists. Zhang et al. [4] proposed a general framework for controlling two shadow puppets, a human model and an animal model. Cooper S. Yoo et al. [24] created a tangible stage interface that can be used to control marionettes without wires based on Kinect. The Kinect camera provides very accurate depth estimation, which greatly facilitates the animation process, but it also limits the system.

⁷http://design-io.com/projects/PuppetParadeCinekid/
⁸http://wearemonsters.net/

Figure 2: Illustration of the whole system framework. The upper panel is the creator module and the lower panel is the manipulator module.

3. OVERVIEW
In this section, we give an overview of the shadow puppetry system. As shown in Figure 2, the whole system contains two modules: the creator module and the manipulator module.

The first part is the creator module. Since a real puppet face usually contains a profile (including forehead, nose, mouth, etc.) and a frontal view eye, we require users to input one frontal view face image and one profile view face image. First, we extract the eye and eyebrow from the frontal view image, and the puppet eye and eyebrow are deformed into the frontal view eye and eyebrow shape. Then the profile curve of the real profile face is extracted and deformed. Finally, the texture is transferred from a certain sample puppet to our generated puppet.

The second part is the manipulator module. The objective of this part is to create a system that takes in a user script and outputs an animation sequence. To build this system, we collect puppetry show videos and manually define atomic motions as the building blocks of the animation. Instances of the atomic motions are extracted from these videos. The script input by the user is first interpreted into an atomic motion sequence, which is then synthesized as a weighted mean of the instances, as introduced later.
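The manipulator pipeline above can be sketched in code. This is a hypothetical illustration, not the paper's implementation: the script format follows the example in Figure 1(f), each atomic motion instance is assumed to be a T x 12 array of DOF values, and `parse_script_line` and `synthesize` are names of our own invention.

```python
import numpy as np

def parse_script_line(line):
    """Parse e.g. 'Puppet.walk(distance=10,duration=15)' into (name, kwargs)."""
    head, _, args = line.partition("(")
    name = head.split(".")[-1]
    kwargs = {}
    for pair in args.rstrip(")").split(","):
        if "=" in pair:
            k, v = pair.split("=")
            kwargs[k.strip()] = float(v)
    return name, kwargs

def synthesize(instances, weights):
    """Weighted mean of atomic-motion instances, all resampled to length T."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # weights lie on the simplex
    stacked = np.stack(instances)           # shape (K, T, 12)
    return np.tensordot(w, stacked, axes=1)  # shape (T, 12)

name, kwargs = parse_script_line("Puppet.walk(distance=10,duration=15)")
walk_instances = [np.random.rand(15, 12) for _ in range(3)]  # stand-in data
frames = synthesize(walk_instances, [0.5, 0.3, 0.2])
print(name, kwargs, frames.shape)  # walk {'distance': 10.0, 'duration': 15.0} (15, 12)
```

Normalizing the weights onto the simplex mirrors the paper's "sparsity optimization over simplexes" setting, though the real system learns the weights rather than fixing them.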

4. MODULE I: PUPPETRY CREATOR
The design of Chinese shadow puppetry figures follows traditional aesthetics. There are mainly four kinds of puppetry faces: Sheng (male roles), Dan (female roles), Jing (roles with painted faces) and Chou (comic characters). These roles are mostly from classical plays and their appearances are formalized. Since the procedures of puppet making are very complicated and costly, it is hard for the user to create a specific puppetry figure. In this section, we introduce the creator part of our system in detail.

Figure 3: Aligned frontal and profile view faces. (a) Frontal face alignment, (b) profile face alignment.

4.1 Face Alignment
Given a frontal view face image and a profile face image of the same person, we utilize the frontal view eye & eyebrow and the profile curve to create the head of a user-specific puppetry figure.

Face Alignment: We process the face alignment of the frontal face and the profile face separately. The frontal view face is aligned by a commercial frontal face alignment algorithm⁹. The alignment of the profile face is performed based on the unified model proposed by Zhu et al. [25] for face detection, pose estimation, and landmark estimation in real-world, cluttered images. We show the aligned results of the frontal and profile views in Figure 3. After face alignment, we get the locations of a set of keypoints on the face. Based on these points, we extract the profile curve from the profile face and rotate the curve for direction adjustment. Similarly, the eye and eyebrow areas are extracted from the frontal image.

4.2 Eye & Eyebrow Warping
In Chinese shadow puppetry, most of the figures' faces are in profile view. To describe the features of characters, eyes and eyebrows are abstracted and exaggerated in an artistic way. For example, the eye of a puppet is stretched and the eyebrow is bent. The topology of the puppet eye is similar to that of the frontal view eye. To guarantee the similarity of the output figure face, given a target frontal view face image and a puppet face, we warp the puppet eye and eyebrow into the real eye and eyebrow shape.

Most state-of-the-art warping methods [26, 27] assume that the user provides explicit correspondences between the anchor points of the source and target shapes, and other points are mapped according to their relative positions with respect to the anchor points. Barycentric coordinates of triangles provide a convenient way to linearly interpolate the points within a triangle. Given a planar triangle [v1, v2, v3], any point v inside it has the unique barycentric coordinates [w1, w2, w3] with:

\frac{w_1 v_1 + w_2 v_2 + w_3 v_3}{w_1 + w_2 + w_3} = v. \quad (1)

The barycentric coordinates are used as invariants to build the correspondences between two topologically equivalent polygons, characterized by their vertices [27].
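A minimal sketch of Eq. (1): compute the barycentric coordinates of a point inside a source triangle by solving the linear system, then use those coordinates to place the point in a target triangle. Function names are illustrative, not from the paper.

```python
import numpy as np

def barycentric(v, v1, v2, v3):
    """Solve w1*v1 + w2*v2 + w3*v3 = v with w1 + w2 + w3 = 1 (Eq. 1)."""
    A = np.array([[v1[0], v2[0], v3[0]],
                  [v1[1], v2[1], v3[1]],
                  [1.0,   1.0,   1.0 ]])
    return np.linalg.solve(A, np.array([v[0], v[1], 1.0]))

def map_point(v, src_tri, dst_tri):
    """Transfer v from the source triangle into the target triangle."""
    w = barycentric(v, *src_tri)
    return w @ np.array(dst_tri)   # weighted sum of the target vertices

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
p = map_point((0.25, 0.25), src, dst)
print(p)  # [0.5 0.5] -- the target triangle is the source scaled by 2
```

Applying this per triangle of a triangulation of the eye polygon gives the piecewise-linear warp the section describes.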

In our case, we first extract the contour points of the puppet eye C = {c1, · · ·, cn} by annotating landmarks, and detect the contour points of the real eye P = {p1, · · ·, pm} automatically. In our setting, n = m = 10.

We consider the puppet eye and the real eye as two topologically equivalent polygons, characterized by C and P, and then build the correspondences between the two eyes based on barycentric coordinates. The correspondences are used to warp the reference puppet eye into the shape of the real eye. Similarly, for the eyebrow, we carry out the same process on the puppet.

⁹http://www.omron.com/r_d/coretech/vision/okao.html

Figure 4: Eye & eyebrow warping. (a) Frontal face image, (b) extracted eye and eyebrow region after face alignment, (c) puppet face in our repository, (d) extracted eye and eyebrow of the puppet, (e) eye and eyebrow regions after warping the puppet's eye and eyebrow to the real face shape.

Figure 5: Profile curve generation. (a) The profile face provided by the user, (b) the extracted profile curve after rotation, (c) the skeleton of the profile curve, (d) the reference puppet face, (e) the reference profile curve skeleton and (f) the refined skeleton of the profile curve after recombination.

One example of eye and eyebrow warping is shown in Figure 4. The original eye and eyebrow of the reference puppet in Figure 4(c) are warped to the shape of Audrey Hepburn's eye and eyebrow in Figure 4(b). The synthetic result is shown in Figure 4(e). The newly generated eye and eyebrow fuse the characteristics of Audrey Hepburn and the reference puppet: Audrey Hepburn's eyes are big and slightly turned upwards, and the reference puppet's eyes have obvious double eyelids.

4.3 Profile Curve Generation
The central profile curve is an important geometric feature. To generate an appropriate puppet profile, we consider two aspects: 1) the profile should look like the input face, and 2) the profile should have puppetry style. We keep the most unique parts of the profile curve and transform them into a curve with puppetry style.

4.3.1 Profile Abstraction
The central profile curve is highly discriminative. Some 3D facial research works use the central profile curve as a significant feature for face matching [28] and face recognition [29]. To obtain a discriminative and puppet-like personalized puppet, we generate a profile curve which has the uniqueness of the input face and the characteristics of the reference puppet face. In order to estimate the uniqueness of a real profile face, we collect a 90-degree profile face dataset including 500 images from the well-known MultiPIE benchmark [30].

Figure 6: Texture transfer. (a) Binary puppet profile curve, (b) texture sample, (c) profile curve after texture transfer.

Given a 90-degree profile face, we extract the edge of the profile face using the extended Difference of Gaussians (DoG) algorithm [31], and based on the face alignment result, we obtain the profile curve. A profile curve is separated into four parts: forehead, nose, mouth and jaw. We aim to preserve the unique parts and replace the others with the corresponding parts of the reference puppet, which are manually annotated offline.

First, we process this curve area with binary morphology tools and extract the binary skeleton line with a width of only one pixel [32]. Then, for each part, we extract a histogram of oriented gradients (HOG) [33] feature vector. For each part, based on the MultiPIE dataset, we calculate the average distance d̄ over the dataset; the importance of a part of the input face is measured by the ratio of d, the mean of its distances to its K nearest neighbors (K = 3 in our implementation), over d̄. The two most unique parts are kept, and the other two parts are replaced by the corresponding parts of the shadow puppetry profile.
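The part-importance measure described above can be sketched as follows. This is an illustrative assumption of the scoring, not the paper's code: plain random vectors stand in for the per-part HOG descriptors, and `importance` is a name of our own.

```python
import numpy as np

def importance(query, dataset, k=3):
    """Mean K-NN distance of the query, normalized by the dataset average distance."""
    dists = np.linalg.norm(dataset - query, axis=1)
    knn_mean = np.sort(dists)[:k].mean()            # d: mean distance to K nearest neighbors
    # d-bar: average pairwise distance within the dataset
    diffs = dataset[:, None, :] - dataset[None, :, :]
    avg = np.linalg.norm(diffs, axis=2).sum() / (len(dataset) * (len(dataset) - 1))
    return knn_mean / avg

rng = np.random.default_rng(0)
dataset = rng.normal(size=(500, 36))   # stand-in for per-part HOG vectors
common = dataset.mean(axis=0)          # a very typical part
unusual = common + 5.0                 # a part far from the dataset
print(importance(common, dataset) < importance(unusual, dataset))  # True
```

A part that sits far from its nearest neighbors in the 500-image dataset scores high and is kept; a typical part scores low and is replaced by the reference puppet's part.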

From the profile image shown in Figure 5(a), we get the direction-adjusted profile curve in Figure 5(b). The parts in green circles are the two most unique parts and are retained as shown in Figure 5(c). The parts in gray circles are replaced with the parts in red circles in Figure 5(e), extracted from Figure 5(d). Finally, we recombine these selected parts and generate a puppet profile curve in skeleton form.

4.3.2 Profile Texture Transfer
After the previous steps, we obtain a profile skeleton which resembles both the real face and the puppet face. We then dilate this line until it reaches the profile width of the reference puppet, as shown in Figure 6(a). To make it more like a puppet, we transfer a leather texture to the profile area. Efros et al. [34] presented a simple image quilting algorithm for generating novel visual appearance, in which a new image is synthesized by stitching together small patches of existing images. Given a piece of leather sample as in Figure 6(b), the texture for the profile curve can be synthesized by the image quilting method [34]. In Figure 6(c), we show an example of the texture transfer result.
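A much-simplified sketch of the quilting idea of [34]: tile the target area with patches from the texture sample, picking each patch to minimize the sum of squared differences (SSD) against the already-placed overlap. The full algorithm also computes a minimum-error boundary cut between patches, which is omitted here; all names and parameters are our own.

```python
import numpy as np

def quilt(sample, out_size, patch=16, overlap=4, candidates=50, seed=0):
    """Fill out_size with patches from sample, greedily minimizing overlap SSD."""
    rng = np.random.default_rng(seed)
    step = patch - overlap
    out = np.zeros(out_size)
    for y in range(0, out_size[0] - patch + 1, step):
        for x in range(0, out_size[1] - patch + 1, step):
            best, best_err = None, np.inf
            for _ in range(candidates):   # sample random candidate patches
                sy = rng.integers(0, sample.shape[0] - patch)
                sx = rng.integers(0, sample.shape[1] - patch)
                cand = sample[sy:sy + patch, sx:sx + patch]
                err = 0.0
                if x > 0:   # SSD against the left overlap already placed
                    err += ((cand[:, :overlap] - out[y:y + patch, x:x + overlap]) ** 2).sum()
                if y > 0:   # SSD against the top overlap already placed
                    err += ((cand[:overlap, :] - out[y:y + overlap, x:x + patch]) ** 2).sum()
                if err < best_err:
                    best, best_err = cand, err
            out[y:y + patch, x:x + patch] = best
    return out

texture = np.random.rand(64, 64)   # stand-in for the grayscale leather sample
result = quilt(texture, (64, 64))
print(result.shape)  # (64, 64)
```

Masking the quilted output with the dilated profile region would then yield the textured profile area of Figure 6(c).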

4.4 Post Processing
In shadow puppetry plays, there are many vivid props, including architecture, furniture, plants and animals, designed with distinct dynastic features. Some representative examples are shown in Figure 7(a). To make the roles more artistically charming, puppets are decorated with exquisite headdresses, which are classified into male and female styles. We match the generated puppet with the proper headdress, and some results are shown in Figure 7(b).

5. MODULE II: PUPPETRY MANIPULATORTraditional Chinese shadow puppet is created by hinging

together nine parts: head, upper body, left & right arm,

(a) Props in the play

(b) Different head dressing

Figure 7: Props in the puppetry and different head

dressing.

[Figure 8: Degrees of freedom and annotation points. (a) The 12 degrees of freedom: head x, y position and rotation angle; left/right shoulder, left/right elbow, waist, and front/rear foot rotation angles; the prismatic waist joint; and rotation about the y axis. (b) Annotation points for DOF inference.]

left & right hand, lower body, front & rear foot. Although there are variants that replace the lower body with two upper legs, the nine-part design dominates. In a real puppet show, each puppet is controlled with three stick manipulators. The major manipulator is attached to the neck, clipping together the head and the upper body. Each of the other two manipulators is hinged to one hand. Based on this control model, shadow puppets have their own distinctive pattern of motion. In this section, we describe a system that can animate shadow puppets in their own style, learned from real-world puppet shows.

5.1 Control Model

In our system we adopt the nine-part puppet design, i.e., the digital puppet consists of nine parts linked by 7 joints and in total has 12 degrees of freedom (DOF), as depicted in Figure 8(a). The 7 joints are named according to their positions: left & right shoulder, left & right elbow, waist, and front & rear knee. All 7 joints are revolute joints except the waist, which has an additional prismatic joint along the vertical axis of the upper body. Therefore, among the 12 DOFs, seven describe the rotation angles at the joints; three describe the x and y coordinates and the rotation angle of the head center (we choose the head center as the descriptor of puppet position because the major manipulator is located near the head); the remaining two DOFs are the rotation angle about the y axis of the 3D scene, which is used to flip the puppet, and the displacement of the prismatic joint at the waist, which produces the undulation of the upper body.


The conformation of the puppet is controlled by assigning values to the 12 DOFs. Animation can thus be synthesized by feeding in a sequence of DOFs. The method to generate the DOF sequence is described in the following sections.
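Under this control model, one animation frame is just a 12-dimensional vector, and an animation is a matrix with one such column per frame. A minimal sketch follows; the index layout and all names are our own illustration, consistent with but not taken from the paper's implementation:

```python
import numpy as np

# One animation frame = one 12-DOF vector. Hypothetical index layout:
# 0-2  : head centre x, y and head rotation angle
# 3-9  : rotation angles of the 7 joints (shoulders, elbows, waist, knees)
# 10   : rotation about the scene's y axis (flips the puppet)
# 11   : displacement of the prismatic waist joint (body undulation)
N_DOF = 12

def make_frame(head_x=0.0, head_y=0.0, head_rot=0.0,
               joint_angles=(0.0,) * 7, flip=0.0, waist_shift=0.0):
    """Assemble a single 12-DOF frame vector."""
    assert len(joint_angles) == 7
    return np.array([head_x, head_y, head_rot, *joint_angles,
                     flip, waist_shift])

# An animation is a DOF matrix: one column per frame.
frames = [make_frame(head_x=t * 0.5) for t in range(4)]
animation = np.stack(frames, axis=1)   # shape (12, 4)
```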

5.2 Motion System

5.2.1 Identification of Atomic Motions

Based on five videos of popular real-world puppet shows, i.e., "Tale of the White Snake Madam", "Emperor Xuanzong of Tang", "Yanfei Wonder Woman", "Yang Saga", and "Green Dragon Sword", we observe that the animation of puppets can be decomposed into smaller building blocks, which we call atomic motions. Eight atomic motions identified from the videos are listed in Table 1.

5.2.2 Data Preparation and Processing

After the atomic motions are identified, we manually extract video chunks that contain atomic motions from the puppet shows. For each atomic motion, we extract 50 video chunks, which we call instances of that atomic motion. Each frame in the video chunks is labeled with 12 key points that describe the conformation of the puppet, as shown in Figure 8(b). From the key points, we can easily calculate the DOF information for each frame. Thus each instance of an atomic motion is finally converted to an instance matrix, with each column being the DOFs of the corresponding frame.
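Recovering a rotation-angle DOF from two annotated key points reduces to an atan2 on the segment they define. A minimal sketch (the key-point pairing is our own illustrative assumption):

```python
import math

def segment_angle(p_from, p_to):
    """Rotation angle of a limb segment defined by two annotated key
    points, measured in radians from the positive x axis."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    return math.atan2(dy, dx)

# e.g. a shoulder key point at (0, 0) and an elbow at (1, 1)
angle = segment_angle((0.0, 0.0), (1.0, 1.0))
```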

However, a shaking defect is observed when the DOFs are applied directly to our digital puppets: the available videos are generally of low quality, which compromises the annotation accuracy, so the hand-labeled points introduce high-frequency shaking. To overcome this defect, we apply locally weighted scatterplot smoothing to the rows of the DOF matrix. Later experiments show that the smoothing improves the visual experience significantly.
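The row-wise smoothing can be sketched as follows. This is a compact tricube-weighted local linear fit standing in for full LOWESS; the window fraction is an assumed parameter, not the paper's setting:

```python
import numpy as np

def lowess_row(y, frac=0.3):
    """Locally weighted linear smoothing of one DOF trajectory y[t]:
    at each frame, fit a line to the nearest window of frames using
    tricube weights (a compact stand-in for full LOWESS)."""
    n = len(y)
    k = max(3, int(frac * n))
    x = np.arange(n, dtype=float)
    out = np.empty(n)
    for i in range(n):
        idx = np.argsort(np.abs(x - x[i]))[:k]         # k nearest frames
        d = np.abs(x[idx] - x[i])
        w = (1.0 - (d / (d.max() + 1e-12)) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)                                # weighted least squares
        A = np.vstack([np.ones(k), x[idx]]).T
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        out[i] = coef[0] + coef[1] * x[i]
    return out

def smooth_instance(M, frac=0.3):
    """Smooth every row (one DOF trajectory each) of an instance matrix
    M of shape (num_dofs, num_frames)."""
    return np.vstack([lowess_row(np.asarray(row, dtype=float), frac)
                      for row in M])
```

Because each local fit is linear, an exactly linear trajectory passes through unchanged, while high-frequency annotation jitter is averaged out.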

After smoothing, the instances of the same atomic motion are temporally realigned by linear interpolation. For example, suppose atomic motion A has n instances, each converted to a matrix A_i. The A_i are linearly interpolated to have the same number of columns as the matrix with the most columns, yielding the aligned matrices A'_i. We consider any weighted mean of the aligned instances as an eligible variant of that atomic motion:

A_new = A'_1 w_1 + A'_2 w_2 + ... + A'_n w_n,        (2)

where sum_{i=1}^{n} w_i = 1, w_i > 0.

Unlimited variants of the atomic motion can be generated, which form a rich set of ingredients for building complex animations.
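The realignment and weighted mean of Equation (2) can be sketched directly; function names are our own:

```python
import numpy as np

def realign(instances):
    """Linearly interpolate each instance matrix (num_dofs x num_frames)
    along the time axis so that all instances have as many frames as the
    longest one."""
    T = max(M.shape[1] for M in instances)
    out = []
    for M in instances:
        t_old = np.linspace(0.0, 1.0, M.shape[1])
        t_new = np.linspace(0.0, 1.0, T)
        out.append(np.vstack([np.interp(t_new, t_old, row) for row in M]))
    return out

def blend(instances, weights):
    """Weighted mean of aligned instances (Equation 2); the weights must
    be non-negative and sum to one."""
    aligned = realign(instances)
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return sum(wi * Ai for wi, Ai in zip(w, aligned))
```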

5.2.3 Recombination of Atomic Motions

Atomic motions captured from the videos are limited in type. One way to increase their number is through recombination. Each atomic motion can be decomposed into independent parts; for example, the upper body motion is independent of the lower body motion. We can thus combine the upper body motion of "bow" with the lower body motion of "kneel" to form a new atomic motion, "kowtow".
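On instance matrices, recombination is just a row-wise splice. A sketch, where the split of DOF indices into upper and lower body is our own illustrative assumption:

```python
import numpy as np

# Hypothetical row split: which DOF indices belong to the upper and
# lower body (indices are illustrative, not from the paper).
UPPER = [0, 1, 2, 3, 4, 5, 6]   # head + arm + waist rotation rows
LOWER = [7, 8, 9, 10, 11]       # knee, flip and prismatic-waist rows

def recombine(upper_motion, lower_motion):
    """Build a new atomic motion, e.g. 'kowtow' = upper rows of 'bow'
    plus lower rows of 'kneel'.  Both matrices must share a frame count."""
    assert upper_motion.shape == lower_motion.shape
    M = np.empty_like(upper_motion)
    M[UPPER] = upper_motion[UPPER]
    M[LOWER] = lower_motion[LOWER]
    return M
```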

5.2.4 Composite Motions

While atomic motions provide high flexibility for animation synthesis, composite motions offer convenience. A composite motion consists of sequential atomic motions and is defined for easier use of the scripting interface. For instance,

Table 1: Atomic motions

  jump     The whole body moves forward for some distance, which
           represents walking in a shadow puppet show.
  bow      Lowering the head or upper body as a social gesture.
  sit      Sitting on a chair in the scene.
  kneel    Kneeling down on the ground.
  point    A hand gesture frequently appearing during speech,
           indicating that the puppet is talking.
  swing    The forward and backward motion of the hands.
  breathe  Undulation of the upper body; in a puppet show, this
           motion indicates the puppet is talking.
  turn     Flipping the puppet about the y axis of the scene,
           changing its facing direction.

the composite motion "talk for 10 minutes" means "breathe" continuously and "point" occasionally for 10 minutes. In this case, it would take much more effort to input the atomic motions one by one. The scripting system handles the conversion of composite motions to atomic motions. Composite motions are an addition to atomic motions that gives users higher-level control of the puppet.
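The conversion the scripting system performs can be sketched as a simple expansion function. The expansion rule and durations below are illustrative assumptions, not the paper's actual tables:

```python
def expand_composite(motion, **kw):
    """Expand a composite motion into (atomic_motion, duration) pairs.
    'talk' here is a toy expansion: breathe continuously, point now and
    then; any other name is treated as already atomic."""
    if motion == "talk":
        total = kw.get("duration", 10)
        out, t = [], 0
        while t < total:
            out.append(("breathe", 2))       # one breathe cycle
            t += 2
            if t % 6 == 0 and t < total:     # occasional point gesture
                out.append(("point", 1))
                t += 1
        return out
    return [(motion, kw.get("duration", 1))]  # already atomic

sequence = expand_composite("talk", duration=10)
```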

5.3 Animation Generation

The animation of puppets can be decomposed into smallest building blocks called atomic motions. Based on atomic motions, composite motions are synthesized as larger building blocks. In this work we render the animation using browser-based WebGL for ease of sharing; each puppet has a corresponding JavaScript object. Through the scripting interface, manipulators of the digital puppet invoke different functions on the object. For each atomic or composite motion, we define a corresponding function. The manipulator can easily control the puppet by invoking the function with parameters such as time duration and spatial distance. For example, puppet.walk(10,15) tells the puppet to walk 10 units in the defined 2D world in 15 frames' time. Through the script, the manipulator can also place props such as chairs at specific locations, and the locations of the props can then be used in the script. The manipulator can either do the scripting on-line by inputting one motion after another, or off-line by writing down all the scripts and feeding them to the puppet. The on-line system is mainly for the user to experience the individual motions. For generating an entire animation sequence, off-line scripting is preferred because it offers better control of timing. In the rest of this section, we present the method for off-line animation generation.

The off-line animation generation system takes a sequence of atomic & composite motions as input from the scripting interface. A preprocessing step decomposes the composite motions into atomic motions, which makes the motion sequence uniformly atomic. The DOF sequence is finally created by converting each atomic motion into DOFs. As mentioned in Section 5.2.2, atomic motions are generated as weighted means of their instances. Reusing a limited number of the instances makes the animation monotonous; by taking different weight


vectors, infinite variants of one atomic motion can be generated from its limited instances. The aesthetics of the animation are greatly influenced by its smoothness: the visual experience is discounted if two adjacent frames differ a lot. Thus, minimizing the position gap between atomic motions is the criterion for selecting the weight vectors.

5.3.1 Weights Selection

The smoothness of the animation sequence can be measured by the position gap between consecutive atomic motions, namely the distance from the last frame of one atomic motion to the first frame of the next. Smoothing the whole motion sequence can be achieved by solving the following optimization problem:

min_{w_i}  s({w_i}) = sum_{i=1}^{N-1} || F_{i+1} w_{i+1} - L_i w_i ||_2^2        (3)

s.t.  sum_j w_{ij} = 1,  w_{ij} >= 0,  i in {1, 2, ..., N}.

For clarity, we introduce the instance matrix M_{kj}, denoting the jth instance of atomic motion k, where k ranges over the atomic motions listed in Table 1, and k(i) denotes the atomic motion type of the ith atomic motion in the motion sequence. F_i is formed by taking the first column of M_{k(i)j} for each j, and L_i is formed by taking the last columns. w_i is the weight vector for the ith atomic motion. For a given atomic motion, the weights are constrained to be non-negative and to sum to one.
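Building F_i and L_i and evaluating the objective s({w_i}) of Equation (3) is direct; a sketch with our own function names:

```python
import numpy as np

def boundary_matrices(instances):
    """F has the first frame of each instance as a column; L the last.
    `instances` is a list of (num_dofs x num_frames) matrices of one
    atomic motion."""
    F = np.stack([M[:, 0] for M in instances], axis=1)
    L = np.stack([M[:, -1] for M in instances], axis=1)
    return F, L

def gap_cost(Fs, Ls, ws):
    """s({w_i}) from Equation (3): the summed squared position gap
    between the end of each atomic motion and the start of the next."""
    return sum(float(np.sum((Fs[i + 1] @ ws[i + 1] - Ls[i] @ ws[i]) ** 2))
               for i in range(len(ws) - 1))
```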

Due to imperfect instance realignment, the weighted recombination of instances exhibits a shaking defect if too many instances are combined. To reduce shaking defects, we impose a sparsity constraint on the weight vectors, so that a minimal number of instances are selected. This yields the following optimization problem:

min_{w_i}  s({w_i}) + lambda * sum_{i=1}^{N} || w_i ||_0        (4)

s.t.  sum_j w_{ij} = 1,  w_{ij} >= 0,  i in {1, 2, ..., N}.

Generally, the l0-norm is relaxed to the l1-norm for computational efficiency. However, the l1-norm heuristic does not work here, because the simplex constraint makes the l1-norm of each weight vector equal to 1, i.e., constant.

We instead use the relaxation proposed in [35] to handle this sparsity-on-probability-simplex problem: the l0-norm can be lower bounded by the reciprocal of the infinity norm when the l1-norm is constant. For any x constrained to the probability simplex:

|| x ||_1 = sum_{i=1}^{n} |x_i| <= || x ||_0 max_i |x_i| <= || x ||_0 || x ||_inf        (5)

Because || x ||_1 = 1,

1 / || x ||_inf <= || x ||_0.        (6)

We then solve the following problem instead:

min_{w_i}  s({w_i}) + lambda * sum_{i=1}^{N} 1 / || w_i ||_inf        (7)

[Figure 9: Evaluation of our creator results from the 30 subjects. Results of the aesthetic and puppetry-alike assessments are shown sequentially. The horizontal axis corresponds to the evaluation levels (excellent, good, ordinary, weak/poor); the vertical axis is the number of persons.]

We show here a single-variable version of this problem:

min_x  s(x) + lambda / || x ||_inf
  = min_x  s(x) + min_i  lambda / x_i
  = min_i  min_x  ( s(x) + lambda / x_i ).        (8)

For each i, min_x s(x) + lambda / x_i is a convex optimization problem if s(x) is convex, since lambda / x_i is a convex function when x_i > 0. The convex optimization is carried out for each i in {1, 2, ..., n} using exponentiated gradient descent [36]. By solving these n convex problems (n is the length of the vector x), the single-variable version of the problem can be globally optimized.
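Exponentiated gradient descent keeps its iterates strictly inside the probability simplex, which is exactly what the lambda / x_i terms require. A sketch on a toy convex objective (the objective, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def eg_minimize(grad, n, lr=0.1, steps=500):
    """Exponentiated gradient descent on the probability simplex:
    w <- w * exp(-lr * grad(w)), renormalised to sum to one.
    Iterates stay strictly positive, so terms like lambda / w_i
    remain finite throughout."""
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        w = w * np.exp(-lr * grad(w))
        w = w / w.sum()
    return w

# Toy objective on the simplex: s(w) = ||w - t||^2 with t itself an
# interior simplex point, so the constrained minimiser is w = t.
t = np.array([0.7, 0.2, 0.1])
w_star = eg_minimize(lambda w: 2 * (w - t), n=3)
```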

Equivalently, (4) is converted to:

min_{k_1} ... min_{k_N}  min_{w_i, i in {1,2,...,N}}  s({w_i}) + lambda / w_{1 k_1} + ... + lambda / w_{N k_N}        (9)

Each of the convex problems can be solved by alternately applying exponentiated gradient updates to each of the variables.

However, the number of convex problems grows exponentially as the number of variables increases. In our case, the number of variables is N, the number of atomic motions in the motion sequence, so it is impractical to solve the weights for the whole motion sequence in one shot. Therefore, we employ a scanning window method: each time we optimize only a segment of the motion sequence. In this way the complexity is linear in the sequence length.
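The scanning-window scheme can be sketched abstractly, with the per-window solver left as a parameter; the window size and interface are our own assumptions:

```python
def optimize_sequence(num_motions, solve_window, window=3):
    """Scanning-window weight selection: optimise the weights of a few
    consecutive atomic motions at a time, keeping earlier weights fixed,
    so the total cost grows linearly with the sequence length.
    `solve_window(start, stop, prev)` returns the weight choices for
    motions start..stop-1; `prev` is the already-fixed choice for the
    motion just before the window (None for the first window)."""
    weights = []
    for start in range(0, num_motions, window):
        stop = min(start + window, num_motions)
        prev = weights[-1] if weights else None
        weights.extend(solve_window(start, stop, prev))
    return weights

# A dummy per-window "solver" that just returns the motion indices,
# to show how the windows tile the sequence.
result = optimize_sequence(7, lambda s, e, prev: list(range(s, e)), window=3)
```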

After the weights are obtained by the optimization described above, we calculate the whole DOF sequence using Equation (2) for each atomic motion.

6. EXPERIMENTS

In this section, we first evaluate the performance of the creator module and the manipulator module sequentially. We then comprehensively show the results of our system in a "love story with puppetry" scenario, demonstrating the combined effectiveness of the creator and manipulator results.

For the user studies in this section, 30 subjects (7 females and 23 males) aged from 22 to 40 years (µ=27.3, σ=3.9) were invited to participate in our experiments.

6.1 Results on Creator Module

6.1.1 Quantitative Results: User Study

Here we discuss the properties of the created puppets in three aspects: aesthetics, puppetry-alikeness, and discrimination.

Aesthetic: Does the result look elegant?


[Figure 10: The exemplar user interface of our discriminative evaluation tool, showing a puppet alongside four real faces and asking: "Could you recognize the people from the generated face? Please select the one you think is most similar to the puppet, and evaluate the similarity."]

Table 2: ANOVA analysis comparing smoothed and original atomic motions

  F-statistic    p-value
  1025.49        5.02281e-121

Puppetry-alike: Does the generated face keep the characteristics of a puppet?

For these two questions, we offer subjects five ratings: excellent, good, ordinary, weak, and poor. We present 10 created puppet images to the subjects, and the results are shown in Figure 9. Our creator module achieves very high scores on both evaluation metrics.

Discrimination: Can you recognize the person from the generated face? We evaluate this aspect with several single-choice questions. For example, we show a puppet of Emma Watson to the subjects and, at the same time, four face images: one of Emma Watson and three random choices. An example of the user study interface is shown in Figure 10. The subjects choose which of the four candidate face images looks most like the puppet. We test 10 creator results and summarize the subjects' feedback. The average precision is 71.1%. The precision is not very high because the generated puppet images are all in profile view, while the candidates are in frontal view. However, a precision of 71.1% is much higher than random guessing (25%).

6.1.2 Qualitative Results

Here we show some example results of the creator module. As shown in Figure 11, the first column contains the input frontal view face images, and the second column contains the input profile face images of the same persons. The creator results are shown in the third column, and the results decorated with headdresses are shown in the fourth column.

6.2 Results on Manipulator Module

6.2.1 Atomic Motion

Here we evaluate the result of applying locally weighted scatterplot smoothing to the instance matrices of atomic motions. For each type of atomic motion, one instance is randomly selected and smoothed. The subjects are presented with both the original and smoothed versions of the atomic motion instances. For each original/smoothed pair, the subjects rate the smoothness on a 5-point Likert scale, without prior knowledge of which one is smoothed. One randomly selected instance of each of the eight types of atomic motions is evaluated. We show the notched box plot of the results in Figure 12(a). A one-way ANOVA analysis is also performed on the data, shown in Table 2. The results show that locally weighted scatterplot smoothing significantly improves the smoothness of the atomic motions.
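A one-way ANOVA of this kind can be reproduced with scipy.stats.f_oneway. The ratings below are made-up illustrative data, not the study's raw scores:

```python
from scipy.stats import f_oneway

# Hypothetical 5-point Likert ratings for smoothed vs. original
# instances (illustrative numbers only, not the paper's data).
smoothed = [5, 4, 5, 4, 5, 4, 4, 5]
original = [2, 3, 2, 1, 2, 3, 2, 2]

f_stat, p_value = f_oneway(smoothed, original)
```

A large F-statistic with a small p-value indicates the between-group difference (smoothed vs. original) dominates the within-group variance.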

6.2.2 Animation Smoothness Evaluation

The smoothness of the generated animation depends not only on the smoothness of the atomic motions themselves, but also

[Figure 12: Notched box plots of smoothness evaluation on a 5-point scale: (a) atomic motion smoothness and (b) transition smoothness, each comparing smoothed and original versions.]

on the transitions between adjacent atomic motions. To evaluate the weight selection method, we generate five animation sequences, each about 30 seconds in length, consisting of random atomic motions. Two versions of the animations are generated: one with weights selected using the method described in Section 5.3.1 and the other with randomly selected weights. Subjects are given the same options as in the evaluation of atomic motion smoothness. Figure 13 compares transitions selected from the animation sequences generated with random weights (a) and smoothness-optimized weights (b). It can be observed that the position gap between consecutive atomic motions is smaller when the smoothness-optimized weights are used.

[Figure 13: Smoothing result on the transition of adjacent atomic motions: (a) random weights; (b) smoothness-optimized weights. Each panel shows the last frame of "jump" and the first frame of "bow".]

The notched box plot of the results is shown in Figure 12(b), and the one-way ANOVA analysis is shown in Table 3. The results show that the weights optimized using our method are significantly better in transition smoothness than randomly selected weights.

6.2.3 Short Video Evaluation

Two video chunks, each about one minute in length, are selected from the puppet shows "Tale of the White Snake Madam" and "Green Dragon Sword". Using the scripting interface, we reproduce these two videos. The subjects in this study are required to watch the original and reproduced videos and then answer questions regarding three aspects of the synthesized videos. The questions are answered on a 5-point Likert scale from "very bad" (1) to "very good" (5). The mean and standard deviation of the scores are given in Table 4. From the results, we can conclude that the puppet plays generated by our system preserve the motion pattern of real puppetry and successfully convey the information to the subjects. The subjects are satisfied with the resolution of the generated puppetry, which is expected because videos captured from real puppet shows are limited by the capturing conditions and are usually of low quality.

6.3 Love Story with Puppetry

To demonstrate the use of both the creator and manipulator modules, we make a short video that tells a love story

Table 3: ANOVA analysis comparing animation sequences generated using weights from smoothness optimization and random selection

  F-statistic    p-value
  327.53         6.60827e-50


[Figure 11: Examples of the creator module (panels (a)-(h)). The first two columns correspond to the frontal and profile images provided by users. The third column is the generated puppet, and the last column is the puppet after adding the headdress.]

Table 4: Evaluation of the short video as compared to the original video

  Question                                              Mean     Stdev
  Q1. Does the generated video preserve the motion
      pattern of a real puppet?                         4.3333   0.6609
  Q2. How is the information conveyed?                  4.2333   0.6789
  Q3. How is the resolution?                            4.6667   0.6065

Table 5: Evaluation of the "love story with puppetry"

  Question                                              Mean   Stdev
  Q1: Does the video convey the information? Could
      you understand the whole story by watching
      the video?                                        4.14   0.9438
  Q2: Do you like the style of interaction between
      physical space and virtual space?                 4.00   0.8459
  Q3: Can you recognize the person generated by our
      puppet creator module?                            4.01   1.0184
  Q4: Does the generated puppet preserve the
      traditional puppet style?                         4.37   0.6897
  Q5: Does the generated action pattern preserve
      the traditional puppet style?                     4.31   0.6311
  Q6: Through practical operation of the manipulator
      module, do you find it convenient to use?         4.27   0.9131
  Q7: Do you think our work can help preserve
      traditional Chinese shadow puppet culture?        4.35   0.9632
  Q8: Do you want to see a similar product in a
      puppet museum?                                    4.60   0.6547

between a man and a female puppet. The animations of the puppets in this video are generated using our manipulator module. The man in this story finally becomes a puppet, which is generated by our creator module.

Story Synopsis: A boy named Ryan was a PhD student. He was sad because he was still single. One day, a miracle happened: a very beautiful puppet girl appeared and said to him, "Would you come to my world?" So he was changed into a puppet and entered the girl's world. They got married and lived happily ever after. Some example frames are displayed in Figure 14. The full video can be watched online^10 and is also included in the supplementary materials.

^10 http://www.youtube.com/watch?v=lxJUw3mKF18yA

We designed 8 questions and asked all the subjects to watch and evaluate the video on a 5-point Likert scale: very good, good, normal, bad, and very bad, scored from 5 down to 1. Besides watching the video, the subjects also tried the manipulator module by typing the scripts themselves and experiencing the play creation. We then compute the average score for each question; the results are shown in Table 5. Q1 and Q2 relate to the user-friendliness of our system. Q3 and Q4 relate to the performance of the creator module, while Q5 and Q6 relate to the performance of the manipulator module. Q7 and Q8 relate to our initial motivation.

From the results we can conclude that most subjects agree that both the generated puppet (mean score 4.37) and its action pattern (mean score 4.31) preserve the traditional puppet style well. Most subjects also think our work can help preserve traditional Chinese shadow puppet culture (mean score 4.35), and want to see a similar product in a puppet museum (a quite high score of 4.60). Due to the limitation of the profile view, people cannot always recognize the person behind the puppet generated by our creator module (score 4.01); we leave refining this part as future work. The subjects also provided some suggestions. One subject commented, "The idea is quite good, and I look forward to more wonderful puppetry videos". Some subjects thought that more atomic actions should be added, and one thought we should enhance the soundtracks of the puppetry.

7. CONCLUSIONS AND FUTURE WORK

In this paper, we proposed the eHeritage of shadow puppetry, including a creator module and a manipulator module. The creator module generates a puppet for a person based on his/her frontal view and profile face images. The manipulator module automatically generates motion sequences based on the script provided by the user. We conducted extensive experiments on the creator and manipulator modules, and the results show the effectiveness of the proposed system. Currently our system mainly focuses on the visual effect of the puppetry. In the future, we would like to give more consideration to the audio aspects, such as automatic vocal style transfer from human singing to puppetry-style singing.

8. ACKNOWLEDGMENT

This research is partially supported by the Singapore National Research Foundation under its International Research


[Figure 14: Some representative frames (1-6) of the "love story with puppetry", together with part of the driving script: Puppet.walk(distance=10,duration=15), Puppet.talk(duration=10), Puppet.point(duration=15), Puppet.sit(duration=20), Puppet.point(duration=18), Puppet.kneel(duration=20), Puppet.kowtow(duration=40).]

Centre @Singapore Funding Initiative and administered by the IDM Programme Office; and also partially supported by the State Key Development Program of Basic Research of China 2013CB336500, and in part by the Natural Science Foundation of China (NSFC) under grant 61172164.

9. REFERENCES

[1] F. Chen, "Visions for the masses: Chinese shadow plays from Shaanxi and Shanxi," Pacific Science, 2012.

[2] L. Leite and V. Orvalho, "Shape your body: control a virtual silhouette using body motion," in Proc. ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems, 2012.

[3] S. Y. Lin, C. K. Shie, S. C. Chen, and Y. P. Hung, "Action recognition for human-marionette interaction," in Proc. ACM International Conference on Multimedia, 2012.

[4] H. Zhang, Y. Song, Z. Chen, J. Cai, and K. Lu, "Chinese shadow puppetry with an interactive interface using the Kinect sensor," in Proc. 12th European Conference on Computer Vision Workshops, 2012.

[5] A. Paviotti and D. A. Forsyth, "A lightness recovery algorithm for the multispectral acquisition of frescoed environments," in IEEE 12th International Conference on Computer Vision Workshops, 2009.

[6] L. Wolf, R. Littman, N. Mayer, N. Dershowitz, R. Shweka, and Y. Choueka, "Automatically identifying join candidates in the Cairo Genizah," in IEEE 12th International Conference on Computer Vision Workshops, 2009.

[7] T. Luo, R. Li, and H. Zha, "3D line drawing for archaeological illustration," International Journal of Computer Vision, 2011.

[8] M. Seidl, M. Zeppelzauer, and C. Breiteneder, "A study of gradual transition detection in historic film material," in Proc. 2nd Workshop on eHeritage and Digital Art Preservation, 2010.

[9] A. Mallik, S. Chaudhury, and H. Ghosh, "Preservation of intangible heritage: a case-study of Indian classical dance," in Proc. 2nd Workshop on eHeritage and Digital Art Preservation, 2010.

[10] B. Gooch and A. Gooch, Non-Photorealistic Rendering, 2001.

[11] Z. Ruttkay and H. Noot, "Animated CharToon faces," in Proc. 1st International Symposium on Non-Photorealistic Animation and Rendering, 2000.

[12] H. Chen, N. N. Zheng, L. Liang, Y. Li, Y. Q. Xu, and H. Y. Shum, "PicToon: a personalized image-based cartoon system," in Proc. 10th ACM International Conference on Multimedia, 2002.

[13] J. Xu, C. S. Kaplan, and X. Mi, "Computer-generated papercutting," in 15th Pacific Conference on Computer Graphics and Applications, 2007.

[14] M. Meng, M. Zhao, and S. C. Zhu, "Artistic paper-cut of human portraits," in Proc. International Conference on Multimedia, 2010.

[15] A. Mazalek, M. Nitsche, C. Rebola, A. Wu, P. Clifton, F. Peer, and M. Drake, "Pictures at an exhibition: a physical/digital puppetry performance piece," in Proc. 8th ACM Conference on Creativity and Cognition, 2011.

[16] S. W. Hsu and T. Y. Li, "Planning character motions for shadow play animations," in Proc. International Conference on Computer Animation and Social Agents, 2005.

[17] C. Barnes, D. E. Jacobs, J. Sanders, D. B. Goldman, S. Rusinkiewicz, A. Finkelstein, and M. Agrawala, "Video puppetry: a performative interface for cutout animation," ACM Transactions on Graphics, 2008.

[18] K. Tan, A. Talib, and M. Osman, "Real-time simulation and interactive animation of shadow play puppets using OpenGL," International Journal of Computer and Information Engineering, 2010.

[19] D. H. Kim, M. Y. Sung, J.-S. Park, K. Jun, and S.-R. Lee, "Realtime control for motion creation of 3D avatars," in Advances in Multimedia Information Processing - PCM 2005, 2005.

[20] F. Lu, F. Tian, Y. Jiang, X. Cao, W. Luo, G. Li, X. Zhang, G. Dai, and H. Wang, "ShadowStory: creative and collaborative digital storytelling inspired by cultural heritage," in Proc. 2011 Annual Conference on Human Factors in Computing Systems, 2011.

[21] J. Pan and J. Zhang, "Sketch-based skeleton-driven 2D animation and motion capture," Transactions on Edutainment VI, 2011.

[22] R. Held, A. Gupta, B. Curless, and M. Agrawala, "3D puppetry: a Kinect-based interface for 3D animation," in Proc. 25th Annual ACM Symposium on User Interface Software and Technology, 2012.

[23] L. Leite and V. Orvalho, "Anim-actor: understanding interaction with digital puppetry using low-cost motion capture," in Proc. 8th International Conference on Advances in Computer Entertainment Technology, 2011.

[24] M. D. G. Cooper Sanghyun Yoo, "Digital puppet: a tangible, interactive stage interface," in Proc. SIGCHI Conference on Human Factors in Computing Systems, 2011.

[25] X. Zhu and D. Ramanan, "Face detection, pose estimation, and landmark localization in the wild," in IEEE Conf. Computer Vision and Pattern Recognition, 2012.

[26] J. Gomes, Warping and Morphing of Graphical Objects. Morgan Kaufmann, 1999.

[27] K. Hormann and M. S. Floater, "Mean value coordinates for arbitrary planar polygons," ACM Transactions on Graphics, 2006.

[28] G. Pan, Y. Wu, Z. Wu, and W. Liu, "3D face recognition by profile and surface matching," in Proc. International Joint Conference on Neural Networks, 2003.

[29] T. Nagamine, T. Uemura, and I. Masuda, "3D facial image analysis for human identification," in Proc. 11th IAPR International Conference on Pattern Recognition, Conference A: Computer Vision and Applications, 1992.

[30] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," Image and Vision Computing, 2010.

[31] H. Winnemoller, "XDoG: advanced image stylization with extended difference-of-Gaussians," in Proc. ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering, 2011.

[32] J. Serra, Image Analysis and Mathematical Morphology, 1982.

[33] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in IEEE Conf. Computer Vision and Pattern Recognition, 2005.

[34] A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," in Proc. 28th Annual Conference on Computer Graphics and Interactive Techniques, 2001.

[35] M. Pilanci, L. E. Ghaoui, and V. Chandrasekaran, "Recovery of sparse probability measures via convex programming," in Advances in Neural Information Processing Systems 25, 2012.

[36] J. Kivinen and M. K. Warmuth, "Exponentiated gradient versus gradient descent for linear predictors," Information and Computation, 1997.
