
Pacific Graphics 2014, J. Keyser, Y. J. Kim, and P. Wonka (Guest Editors)

Volume 33 (2014), Number 7

Editing and Synthesizing Two-Character Motions using a Coupled Inverted Pendulum Model

J. Hwang1, I.H. Suh2, and T. Kwon3

1 Department of Computer and Software, Hanyang University
2 Department of Electronics and Computer Engineering, Hanyang University
3 Department of Computer and Software, Hanyang University

Abstract

This study aims to develop a controller for use in the online simulation of two interacting characters. This controller is capable of generalizing two sets of interaction motions of the two characters based on the relationships between the characters. The controller can exhibit motions similar to a captured human motion while reacting in a natural way to the opponent character in real time. To achieve this, we propose a new type of physical model called coupled inverted pendulums on carts, which comprises two inverted-pendulum-on-a-cart models, one for each individual, coupled by a relationship model. The proposed framework is divided into two steps: motion analysis and motion synthesis. Motion analysis is an offline preprocessing step, which optimizes the control parameters to move the proposed model along a motion capture trajectory of two interacting humans. The optimization procedure generates a coupled pendulum trajectory which represents the relationship between the two characters for each frame, and is used as a reference in the synthesis step. In the motion synthesis step, a new coupled pendulum trajectory is planned reflecting the effects of the physical interaction, and the captured reference motions are edited based on the planned trajectory produced by the coupled pendulum trajectory generator. To validate the proposed framework, we used a motion capture data set showing two people performing kickboxing. The proposed controller is able to generalize the behaviors of the two humans to different situations, such as different speeds and turning speeds, in a realistic way in real time.

Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Physically based modeling—I.3.6 [Computer Graphics]: Interaction techniques—I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Animation

1. Introduction

The framework proposed in this study facilitates online editing and synthesis of the interactive motions between two characters, while maintaining the naturalness of their motions. Interactions between two characters occur often in character animations and computer games, in various scenarios such as boxing and dancing. In the field of computer animation, many researchers have modeled two-character interactions using motion capture data [HCKL13, KCPS08]. However, if the characters encounter a new situation because of user control or changes in the environment, unrealistic motions may be generated. Because the naturalness of character motion is often inversely proportional to the level of user control [MP07], it is even more difficult to generate convincing motions of two interacting characters that adapt to user controls while maintaining continuous interaction with the opponent character. Insufficient consideration of the physical interaction between the characters, such as force level mismatches, timing errors, and angular distortions, can lead to perceptible anomalies in the realism of two-character interactions [HMO12].

In this paper, we focus on two factors that need to be addressed to ensure natural two-character interactions: spatio-temporal consistency and physical consistency. Spatio-temporal consistency refers to the maintenance of a time-varying spatial relationship, especially a certain distance and angle between the two characters that need to be maintained for each frame.


Figure 1: Conceptual diagram: the proposed framework comprises two steps, motion analysis and motion synthesis. The first step is an offline preprocessing procedure, whereas the second step is conducted online. (In the diagram, the analysis step turns the reference motion into a coupled-pendulum trajectory generator and a reference coupled pendulum trajectory, which the synthesis step uses for interaction planning and kinematic mapping.)

For example, when character 1 approaches character 2 (an opponent) rapidly, character 2 must move backward as quickly as character 1 to maintain their spatial relationship. Physical consistency refers to editing the motion of a character based on the balancing behavior that occurs continuously while the two characters interact. For example, if character 1 directs a forward kick at character 2, the motion of character 2, when it is hit by character 1, should be edited to produce an appropriate reaction, for example through body leaning and backward stepping. To facilitate the editing of natural interactive character motions, each motion should adapt not only to motion types but also to subtle variations in interaction intensity, and still be controllable by the user.

To tackle this problem, we devise a new physical model based on an inverted pendulum on a cart (IPC) model. In physics-based animation, simplified physical models such as inverted pendulums have been widely used to represent character motion according to physical rules while adapting to a user-specified context. Our work is based on the idea of preview control from an inverted pendulum proposed by Kajita, Sugihara, and their colleagues [KNK∗04, Sug08]. A similar idea was adopted by Kwon and Hodgins [KH10] to produce a real-time controller capable of producing various running motions from a single captured reference motion. To represent the interactions between two characters, we devise a new model, which we call coupled inverted pendulums on carts (CIPC), by extending the IPC model. The new simplified model (CIPC) comprises two IPCs for the individual characters and a model of the spatial relationship between them. The main characteristic of this model is that if a user controls the movement of one IPC, the other IPC also follows the control to maintain the relative position, thereby reproducing the continuous interactions between two people (Figure 2). Each pendulum ensures that the motion of each individual character is physically plausible. The resulting movement of the CIPC exhibits motion patterns similar to those observed in captured reference motions of two humans interacting with each other.

The following sections explain the proposed framework and model. In Section 2, previous works related to our system are surveyed. In Section 3, we provide an overview of the system. In Section 4, the proposed physical models are first explained, and then we explain how to map the captured human motions to the proposed model. In Section 5, we explain how to edit character motions using the model parameters acquired in the motion analysis step. In Section 6, we present simulation results to demonstrate the capability of the proposed framework.

2. Related Work

2.1. Multi-character interactions

Many approaches have been proposed to synthesize believable motions involving two or more characters. Some focused on how to collect and interactively select motion segments from captured reference motions depending on the interaction the character is currently involved in. Kim et al. [KPS03] extracted rhythmic patterns from motion capture data and used a movement transition graph to generate animations of characters dancing in a ballroom and marching in accordance with musical rhythms. Lai et al. [LCF05] proposed a group motion graph for data-driven animation of groups of discrete agents, such as flocks, herds, or small crowds. Lee and Lee [LL06] used a reinforcement learning approach to animate a boxer based on motion capture data with minimal runtime costs. Kwon et al. [KCPS08] proposed a system for synthesizing novel motions of standing-up martial arts such as kickboxing and taekwondo from captured motions of two interacting subjects. Two-character interactions have also been studied using game theory, particularly adversarial cases where each character has their own turn. Shum et al. [SKY07, SKY12] proposed methods to generate the best interactive motion adapting to the opponent using a min-max approach. The min-max approach has the weakness that the same interaction pattern is repeatedly generated in the same context. To overcome this problem, Wampler et al. [WAH∗10] proposed an online method for animating two-character interactions in adversarial games based on a Markov model. Shum et al. [SKSY08] proposed a data-driven approach to automatically generate a scene where tens to hundreds of characters densely interact with each other. The close interactions between characters are precomputed by expanding a game tree, and these are stored as data structures called interaction patches. Although our current action selection mechanism is based on [KCPS08], other similar methods can also be adopted in our framework. Our work complements these approaches by improving the generalization capability of a given motion segment.

Other researchers have focused on the spatial relationships between body parts and objects for capturing the semantics in interaction scenes. Ho and Komura proposed a topology-based approach to encode the bodies of two human characters that are tangled, based on the concept of rational tangles.


The idea of tangles was used to retrieve the motions of two human characters in close contact [HK09b], edit the motions [HK09a], and synthesize close interactive motions such as wrestling [HK11]. Kwon et al. [KLLT08] proposed an editing system for multiple characters using a Laplacian mesh editing method. This method was later generalized to deformable motion patches, which describe an episode of multiple interacting characters [HKHL13]. In addition, Ho et al. [HKT10] proposed the use of an interaction mesh for editing and retargeting motions that involve close interactions between body parts of single or multiple characters, such as dancing and wrestling. Later, they demonstrated the usefulness of the interaction mesh in synthesizing a wide variety of motions with close interactions [HCKL13]. Shum et al. [SKY12] proposed a system for generating two-character interactions by constructing a game tree built on top of finite state machines. Kim et al. [KHKL09] proposed a framework that uses a Laplacian motion editing method constrained by spatial and temporal features for multi-character interactions such as carrying boxes. Hoyet et al. [HMO12] investigated the perception of causal relationships during physical interactions between characters and found that four factors are statistically important for realistic character animation: source vs. target, timing errors, force mismatches, and angular distortions. Our method also aims to ensure that the partner's motion is spatially fitted to the user's motion. The main difference between our approach and these methods is that we use a simplified physical model to map the high-dimensional character motion to a low-dimensional motion, which makes the character more controllable and physically plausible.

2.2. Physics-based approaches

Interactions between characters have also been frequently studied using physics-based approaches. Zordan and Hodgins [ZH02] used motion capture data and physical simulation to develop a method that simulated upper-body motions during boxing and table tennis, where the motions are adapted to external forces. Later, Zordan et al. [ZMCF05] extended the approach to synthesize realistic motions of a character falling down from punches and kicks. Liu et al. [LHP06] presented a physics-based method for generating multi-character motions from short single-character sequences. Formulated as a space-time optimization problem, the proposed method was able to generate two-character interaction motions in real time from a few constraints for the desired interactions. Arikan et al. [AFO05] proposed a method that selected the best-fit motion and deformed the motion by adapting to the external force using motion capture data.

Physical interactions with other characters have also been studied in the field of crowd animation. Henderson suggested that pedestrian crowds behave similarly to particles or fluids [Hen74]. Based on this assumption, many practical pedestrian crowd models have been developed to formulate corrections due to interactions, such as collision avoidance and deceleration maneuvers, using forces [Hel92]. Gipps and Marksjo proposed a behavioral force model of individual pedestrian dynamics [GM85]. Helbing et al. defined multiple types of interaction forces that occur among people, including forced pushing, attracting, and maintaining a specific distance, in both normal and panic situations [Hel92, HFV00]. Hughes et al. modeled pedestrians using a 2D continuous density field and derived optimal density fields using partial differential equations [Hug02, Hug03]. Chenney et al. showed that divergence-free flows can effectively be used to represent crowd behaviors in densely packed scenes [Che04]. Treuille et al. used a dynamic potential field based on continuum theory to produce an impressive real-time crowd generation system [TCP06]. Kim et al. presented an interactive algorithm to model physics-based interactions in multi-agent simulations, while allowing the agents to anticipate and avoid collisions for local navigation [KGM13]. Lerner et al. presented a data-driven approach for fitting behaviors to simulated pedestrian crowds [LFCCO09]. By fitting the probability of performing each action to pedestrian crowds, natural crowd behavior was generated.

Figure 2: A coupled inverted pendulum model (CIPC). The left pendulum is controlled by a user and the right pendulum is controlled to maintain its distance from the left pendulum at 1.5 m. The red ball on the ground is the goal position for the left pendulum, which is specified by the user.

These approaches modeled a character as a particle floating on a density field guided by social forces. Our proposed method differs from these previous approaches because we use the coupled inverted pendulum model to represent full-body interactions. Our method can generate more realistic motions because the proposed pendulum model represents the leaning angle of the body and the footsteps due to balancing, as well as the spatio-temporal and physical consistencies learned from captured two-character motions, to improve the naturalness of the interactions.

3. Overview

The proposed system edits an existing two-character motion or generates a new two-character motion in an online process while maintaining the spatio-temporal consistency and physical consistency observed in natural human interactions. The characters can be controlled to move according to the user's input signal.

As shown in Figure 1, the proposed method is divided into two steps: a motion analysis step (offline) and a motion synthesis step (online). The input to the motion analysis step is the captured two-character motion. The motions of the two interacting subjects are recorded simultaneously so that both the individual motions and the underlying interaction dynamics are captured. From the captured data, the center of mass (COM) trajectories of the two characters are first obtained using a pair of human models. Then, a coupled pendulum trajectory generator is obtained using an optimization process that searches for the best control parameters for the proposed low-dimensional physical model to track the captured COM trajectories of the human models. In this way, the CIPC model parameterizes the interaction patterns and motion patterns in the captured human motions. The trajectory generator for the CIPC is used later, in the motion synthesis step, for planning a new coupled pendulum trajectory that satisfies the user input.

The motion synthesis step has two submodules: interaction planning and motion planning. During interaction planning, the module plans a short-horizon future trajectory of the CIPC model in an online manner based on the current state of the CIPC and the control parameters learned during the analysis step. The motion planning module generates a new motion by modifying the captured two-character motion based on the planned trajectory of the CIPC (Figure 8). The proposed method based on the CIPC model has two advantages. First, the user can continuously control the interactive motion in real time because the CIPC model maps high-dimensional motions onto a relatively low-dimensional model which is easy to control. Second, the resulting individual motions are physically plausible.

4. Motion analysis

In this section, we explain how the CIPC model represents two-human interactions. The CIPC model consists of two inverted pendulum models, which we refer to as pendulum 1 and pendulum 2. Each pendulum model produces a physically correct pendulum trajectory, which is converted to a natural motion of the corresponding character. At the same time, each pendulum moves in accordance with the other pendulum, and either of the two pendulums can be controlled according to user input. The interaction between the two pendulums is modeled using interaction forces that make the pendulums follow the captured interaction patterns while maintaining a certain distance between the two characters.

The motion analysis starts by extracting the necessary data, such as the COM, arm, and limb positions, from the captured motions. In Section 4.1, we first explain the human model and the pendulum model, and then we explain an LQR algorithm for balancing a pendulum in Section 4.2. A single pendulum model cannot represent interactions between two characters. Simply planning the trajectory of each pendulum independently and kinematically editing the resulting trajectories to preserve the spatial relation between the characters would result in physically incorrect trajectories of the pendulums and thus unnatural human motions. To resolve this issue, we build an additional layer of control called the interaction controller that couples the two pendulums, which will be explained in Section 4.3. Section 4.4 explains how the parameters of the interaction controller can be obtained from the captured two-character motions.

Figure 3: The local coordinates and interaction parameters. (a) Local interaction direction vectors (representing a local coordinate) are defined using the COM positions of the individual pendulums. (b) The distance and the angular velocity about the vertical axis between the two characters are defined using the local coordinate. The coefficients for the distance and the angular velocity can be computed based on the motion capture data.

4.1. Human model and pendulum model

A human model is used for each character to obtain the time-varying trajectories of the body parts and the COM position from the captured joint angles and positions. The human model has a total of 44 degrees of freedom (DOF). Each ankle and knee uses a 1-DOF hinge joint, the lower back and neck have 2-DOF joints, and the remaining body joints have 3 DOFs. The mass and inertia matrix of each body part are computed from a surface mesh under a uniform density assumption and a measured total mass.

Motivated by the IPC model used in [KH10], we map the human motion of each individual character to an IPC model such that the COM of the IPC model approximately follows the COM trajectory of the character. Each IPC model comprises a cart that moves horizontally on the ground with an inverted pendulum connected by an unactuated joint. Using this IPC model, the leaning of the body due to balancing and the footstep patterns for recovery can be encoded as a smooth pendulum trajectory. A new motion of a character can be generated by editing the captured reference motion to follow a newly generated pendulum trajectory.

Figure 4: Mapping two human interactive motions onto the CIPC model. (a) Example where the velocity of the human COM is used for mapping. (b) Example where the optimization procedure is used for mapping. The red ball represents the human COM and the blue ball represents the COM of the pendulum model. This figure is rendered using a Virtual Reality Modeling Language (VRML) model to clearly illustrate the differences between the two methods.

4.2. Balance Control

The dynamics of a two-dimensional pendulum on a cart can be represented using the state-space equation of a continuous-time linear system, $\dot{s} = As + Bu$, where $s$ is the pendulum state, a four-dimensional vector that includes the position of the cart, the leaning angle of the pole, and their time derivatives. The unactuated joint in the pendulum model makes the model unstable, so continuous control is required for balancing. The linear quadratic regulator (LQR) derived from the state equation produces the optimal control forces that act on the cart to stabilize the pendulum motion.
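As a concrete illustration of this balance controller, the sketch below builds linearized cart-pole state-space matrices and solves for an LQR gain. It is a minimal Python sketch under standard cart-pole assumptions (frictionless cart, point-mass pole); the masses, pole length, and cost weights are illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation and return the gain K."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Linearized inverted-pendulum-on-a-cart dynamics around the upright pose.
# State s = [cart position, cart velocity, pole angle, pole angular velocity],
# input u = horizontal force on the cart.
M, m, l, g = 60.0, 20.0, 1.0, 9.81          # illustrative cart/pole masses (kg), pole length (m)
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -m * g / M, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, (M + m) * g / (M * l), 0.0]])
B = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])

Q = np.diag([1.0, 1.0, 10.0, 1.0])           # state-deviation weights (illustrative)
R = np.array([[0.01]])                        # control-effort weight (illustrative)

K = lqr_gain(A, B, Q, R)                      # 1x4 gain used by the balance controller
```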

Two two-dimensional LQR controllers are used to balance the pendulum of each character: one is used for control along the forward direction and the other for the lateral direction. In the previous work [KH10], on which our method is based, the forward direction of the pendulum was set based on the pelvis orientation of the character. However, this approach cannot be used in two-character interactions because it ignores the relationship between the two characters; for example, the two characters might kick and punch while facing in opposite directions.

Instead, we would like to use one regulator to control the distance to the other character, and the other to control the overall rotational velocity of the two characters. Because such interaction dynamics between two characters cannot easily be formulated as a linearized state-space equation, due to the inherent non-linearity, randomness, and ambiguity in the dynamics, we use a two-level approach: the low-level balancing mechanism of each individual pendulum is modeled with a well-established LQR controller, and the coupling between the two pendulums is built on top of the individual controllers.

Figure 5: The cart trajectory of pendulum 1 and the local foot coordinates that correspond to the left foot produced by the footstep pattern generator: the x-values of the predicted pendulum trajectory are defined in the xz plane (red line), the x-values of the local foot coordinates are generated by dividing the predicted pendulum trajectory into support foot and swing foot phases (blue line), and the sampled pose of the character is shown at the corresponding time (top figures).

To build an individual controller, we first define a local coordinate frame using two orthonormal vectors $v^c$, $c \in \{1, 2\}$. Without loss of generality, we focus on character 1 for ease of explanation. The forward-facing direction vector $v^1_{f,k}$ of character 1 at frame $k$ is defined as:

$v^1_{f,k} = \mathrm{norm}(x^{p2}_k - x^{p1}_k)$,   (1)

where $x^{pc}_k$, $c \in \{1, 2\}$, represents the COM position of character $c$ at frame $k$, and $\mathrm{norm}$ denotes vector normalization. The lateral direction vector $v^1_{l,k}$ is obtained by rotating the forward direction vector $v^1_{f,k}$ by 90 degrees about the vertical axis. These direction vectors define a local coordinate frame to represent a two-character motion in a coordinate-invariant manner (see Figure 3 (a)). We first define each pendulum's desired velocity $\hat{\dot{x}}_k$ using the local coordinate frame:

$\hat{\dot{x}}^1_k = \alpha^1 v^1_{f,k} + \beta^1 v^1_{l,k}$,   (2)

where $\alpha^1$ and $\beta^1$ represent the desired speeds of pendulum 1 along the axes of the local coordinates. The LQR controllers then give us an intuitive way to control the pendulum's desired velocity $\hat{\dot{x}}$ while maintaining the pendulum's balance:

$u = K(\dot{x} - \hat{\dot{x}})$,   (3)

where $\dot{x}$ and $\hat{\dot{x}}$ are the current velocity and the desired velocity of the cart, respectively, $u$ represents the force that acts on the cart, and $K$ is the gain matrix given by the LQR (see details in [DAC]).

Figure 6: Comparison of the motion without the online optimization procedure and the motion optimized in an online manner. The motion in (a)-left is the motion without optimization, in which the glove of a character interpenetrates the opponent's body. The edited motion in (a)-right keeps the spatial relationship between the two characters. The graph in (b) shows the distances between the two characters: the distance in the motion capture (black), the distance without optimization (blue), and the distance with online optimization (red).
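The sketch below shows how Eqs. (1)-(3) might fit together in code: the local facing frame, the desired cart velocity, and a velocity-tracking force. It is a simplified sketch that assumes a y-up world and reduces the LQR state feedback of Eq. (3) to scalar velocity gains k_fwd and k_lat per local axis; the full controller would also feed back the pole lean angle and its rate.

```python
import numpy as np

def facing_frame(x_p1, x_p2):
    """Eq. (1): forward-facing and lateral direction vectors of character 1 on the ground plane
    (y is assumed to be the vertical axis)."""
    v_f = np.asarray(x_p2, dtype=float) - np.asarray(x_p1, dtype=float)
    v_f[1] = 0.0
    v_f /= np.linalg.norm(v_f)
    v_l = np.array([v_f[2], 0.0, -v_f[0]])   # v_f rotated 90 degrees about the vertical axis
    return v_f, v_l

def cart_force(x_p1, x_p2, xdot_p1, alpha, beta, k_fwd, k_lat):
    """Eqs. (2)-(3), simplified: desired cart velocity in the local frame and a force tracking it.
    k_fwd and k_lat stand in for the velocity rows of the two LQR gain matrices."""
    v_f, v_l = facing_frame(x_p1, x_p2)
    xdot_des = alpha * v_f + beta * v_l                     # Eq. (2)
    err = np.asarray(xdot_p1, dtype=float) - xdot_des       # current minus desired velocity
    u_fwd = k_fwd * np.dot(err, v_f)                        # Eq. (3) along the facing direction
    u_lat = k_lat * np.dot(err, v_l)                        # Eq. (3) along the lateral direction
    return u_fwd * v_f + u_lat * v_l                        # horizontal force applied to the cart
```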

4.3. Interaction controller

The interaction controller couples the two IPCs by applying interaction forces between them. The interaction forces are generated by adjusting the desired speed parameters $\alpha^1$ and $\beta^1$ based on the spatial relationship between the pendulums. We parameterize these desired speeds using two components; one component considers the motion of character 1 and the other considers the opponent's motion (that is, character 2):

$\alpha^1 = k^1_1 (d^{p1} - \hat{d}^{p1}) + k^1_2$,   (4)

where $k^1_1$ is the error-feedback gain along the facing direction, and $k^1_2$ is the feedforward coefficient. $\hat{d}^{p1}$ and $d^{p1}$ are the desired distance and the current distance between the two pendulum models (see Figure 3 (b)), respectively. The current distance is computed as $d^{p1} = \lVert x^{p1} - x^{p2} \rVert$.

The error feedback along the lateral direction is designed similarly:

$\beta^1 = k^1_3 (\dot{\theta}^{p1} - \hat{\dot{\theta}}^{p1}) + k^1_4$,   (5)

where $\hat{\dot{\theta}}^{p1}$ and $\dot{\theta}^{p1}$ are the desired and current angular velocities (see Figure 3 (b)), respectively.

Figure 7: A multi-stage optimization scheme is used for interaction parameter estimation. In one stage of the optimization, only a subset of key-frames that belong to frames $f \in \{s_2, s_3, s_4\}$ are optimized, while the objective functions are evaluated for frames $s \in \{s_2, s_3, \dots, s_6\}$ to consider a small window of the future.

The current angular velocity is computed as $\dot{\theta}^{p1} = \beta^1 / r^{p1}_i$, where the radius $r^{p1}_i$ is defined as $0.5\,d^{p1}$. Finally, the equation for the desired velocity of the coupled pendulum system is obtained by substituting Eqs. (4) and (5) into Eq. (2); it provides an error-feedback mechanism with which pendulum 1 and pendulum 2 can be controlled simultaneously while maintaining their spatial relationship.
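A minimal sketch of the interaction controller of Eqs. (4)-(5) follows. The field names in the gains dictionary (k1..k4, d_des, theta_dot_des) are hypothetical, and the sign convention follows the reconstruction above; the actual values are the ones estimated in Section 4.4.

```python
import numpy as np

def interaction_speeds(x_p1, x_p2, theta_dot, gains):
    """Eqs. (4)-(5): desired speeds of pendulum 1 derived from the spatial relationship.
    gains: k1 (distance feedback), k2 (forward feedforward), k3 (turning feedback),
    k4 (lateral feedforward), d_des (desired distance), theta_dot_des (desired angular velocity)."""
    d = np.linalg.norm(np.asarray(x_p1, dtype=float) - np.asarray(x_p2, dtype=float))
    alpha = gains["k1"] * (d - gains["d_des"]) + gains["k2"]                    # Eq. (4)
    beta = gains["k3"] * (theta_dot - gains["theta_dot_des"]) + gains["k4"]     # Eq. (5)
    return alpha, beta

def pair_angular_velocity(beta, d):
    """Current angular velocity of the pair about the vertical axis, with radius r = 0.5 d."""
    return beta / (0.5 * d)
```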

4.4. Interaction parameter estimation

The required control parameters (the desired distance, the desired angular velocity, and the feedforward terms) can be obtained from the motion capture data by making the pendulum controller closely reproduce the captured motion. However, it is not trivial to precisely move the coupled pendulum model so that each individual pendulum closely follows the corresponding human model. As shown in Figure 4 (a), a deviation occurs even with a good initial trajectory of the desired velocities calculated from the captured motion data. Moreover, this error accumulates over time. To calculate a trajectory of control parameters for the pendulum that closely reproduces the captured reference motion, we use a space-time optimization technique. Specifically, we minimize an objective function that is defined as the difference between the COMs derived from the motion capture and the CIPC:

$\{\hat{d}^{p1}\}, \{\hat{\dot{\theta}}^{p1}\} = \operatorname{arg\,min}_{\hat{d}^{p1}, \hat{\dot{\theta}}^{p1}} \sum_{k \le N} \lVert \mathrm{project}(x^{c1}_k - x^{p1}_k) \rVert^2$,   (6)

where $x^{c1}_k$ and $x^{p1}_k$ respectively represent the COM positions of human 1 and pendulum 1 at the $k$-th frame of the captured motion data. $\{\hat{d}^{p1}\}$ and $\{\hat{\dot{\theta}}^{p1}\}$ respectively denote the sets of keyframes for the desired distance and angular velocity, as defined in Eqs. (4) and (5). Each keyframe is extracted every 0.2 second (once per 12 frames). $\mathrm{project}$ denotes projection onto the ground, and $\alpha$ and $\beta$ in Eq. (2) are obtained by optimizing this objective function. These parameters facilitate the precise mapping of the pendulum motions onto the captured motion data, as shown in Figure 4 (b).
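The objective of Eq. (6) can be sketched as follows; simulate_cipc stands for a hypothetical forward simulation of the coupled model under a candidate keyframe schedule and is not part of any published code of the paper.

```python
import numpy as np

def tracking_cost(keyframes, com_mocap, simulate_cipc):
    """Eq. (6): squared ground-plane distance between the captured COM trajectory of
    character 1 and the COM of pendulum 1 simulated under the candidate keyframes.

    keyframes     : keyframe values of the desired distance and angular velocity (one per 0.2 s)
    com_mocap     : (N, 3) captured COM positions of character 1
    simulate_cipc : hypothetical forward simulation returning (N, 3) pendulum COM positions
    """
    com_pend = simulate_cipc(keyframes)
    diff = np.asarray(com_mocap, dtype=float) - np.asarray(com_pend, dtype=float)
    diff[:, 1] = 0.0            # project onto the ground plane (y assumed vertical)
    return float(np.sum(diff ** 2))
```

This cost could then be handed to a general-purpose minimizer; the multi-stage scheme described next keeps each such minimization low-dimensional.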

When the captured input motion is long, the dimensionality of the search space is prohibitively high. Because we experimentally found that optimization in a high-dimensional space is likely to converge to a local minimum, we use a multi-stage optimization scheme (Figure 7) to reduce the dimension of the search space. In the $f$-th stage of the optimization, only the subset of keyframes that belong to frames $s \in \{s_f, s_{f+1}, s_{f+2}\}$ is optimized. The next stage of the optimization uses the best solution from the previous stage as the initial solution. This scheme worked very well in practice because the initial solution produces a reasonable trajectory of the CIPC model, and the subsequent optimizations need to adjust the initial solution only slightly. The optimization can be performed using a conjugate gradient algorithm in a time about 20 times slower than real time, which is acceptable because it needs to be performed only once, in a preprocessing step.
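A sketch of this multi-stage scheme, assuming a generic windowed cost function stage_cost in place of Eq. (6); the window sizes follow Figure 7 (three key-frames optimized, five evaluated), and the conjugate-gradient solver is the one the text mentions.

```python
import numpy as np
from scipy.optimize import minimize

def multistage_optimize(keyframes0, stage_cost, opt_window=3, eval_window=5):
    """Multi-stage scheme of Figure 7: at stage f only key-frames f..f+opt_window-1 are free
    variables, the cost looks ahead over eval_window key-frames, and each stage's result seeds
    the next.  stage_cost(keyframes, start, end) is a hypothetical windowed version of the
    objective in Eq. (6); keyframes0 has shape (n_keys, n_params)."""
    keyframes = np.array(keyframes0, dtype=float)
    n_keys = len(keyframes)
    for f in range(n_keys - opt_window + 1):
        lo, hi = f, f + opt_window
        eval_hi = min(f + eval_window, n_keys)

        def cost(sub):
            trial = keyframes.copy()
            trial[lo:hi] = sub.reshape(hi - lo, -1)
            return stage_cost(trial, lo, eval_hi)

        res = minimize(cost, keyframes[lo:hi].ravel(), method="CG")   # conjugate gradient
        keyframes[lo:hi] = res.x.reshape(hi - lo, -1)                  # seed for the next stage
    return keyframes
```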

The pendulum motion based on the parameters acquired in Eq. (6) allows the CIPC to retain the same interaction patterns observed in the motion capture data. Henceforth, we refer to the pendulum trajectories generated using these parameters as the reference coupled pendulum trajectory. The reference trajectory allows the two-character motions to be edited into new motions that retain physical plausibility. The resulting desired velocities define a controller for the pendulum that reproduces the center-of-mass trajectory of the captured reference motion, and the reference pendulum trajectory is obtained from the optimized controller. The optimized desired distance $\hat{d}$ and angular velocity $\hat{\dot{\theta}}$ calculated from the forward-facing directions are stored for use at run time. The resulting pendulum trajectory generator is used for the preview control performed online at every time step of the simulation of the full human body model.

5. Motion synthesis

During the motion synthesis step, the interactive character motions are edited or rearranged in an online manner by planning a new coupled pendulum trajectory and then mapping the captured two-character motion, or a rearranged version of it, to the new coupled pendulum trajectory. The motion synthesis step comprises two modules: interaction planning and motion planning (see Figure 1).

During the interaction planning step, the future trajectory of the coupled pendulum model is planned based on the user inputs and the motion contexts obtained during the motion analysis step. In the motion planning step, the footstep patterns for each individual character are generated based on the planned trajectory of the CIPC model, and two-character interactive motions are then generated kinematically by using an inverse kinematics solver.

5.1. Interaction planning

The pendulum trajectory generator computes a desired pendulum trajectory from the current state of the CIPC model.

Figure 8: Motion sequences comparing the captured motion (shown in blue) to the edited motion (shown in red). Each motion sequence is uniformly sampled every 4 frames.

Specifically, the positions and leaning angles of the individ-ual pendulums are first calculated using a forward dynamicssimulation of the CIPC model, and then the local coordinateof the CIPC model is constructed based on the positions ofthe individual pendulums. To plan a new motion pattern, theuser can specify one or both character motions that are dif-ferent from the captured motion by modifying the desireddistance and the angular velocity in Eqs (4) and (5). Whenthe desired control parameters are modified, our controllerautomatically modifies the desired speed α and β in Eq (2)so that the spatial relationship between the two characters aremaintained. Another way to generate a new motion is to usemotion rearrangement. We adapt the method in [KCPS08]for interactive motion synthesis. We use the motion selec-tion algorithm proposed in the paper. Not only the motionsegment but also the control parameters for the CIPC modelfor that specific segment is synthesized in an online mannerby stitching together the trajectory of the control parametersobtained during the analysis step.
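One online planning step might be organized as in the sketch below; step_cipc is a hypothetical one-frame forward simulation of the CIPC, and the offset handling mirrors the user-specified parameters described in Section 6.

```python
def plan_pendulum_trajectory(state0, params, user_offsets, step_cipc,
                             horizon=12, dt=1.0 / 60.0):
    """Short-horizon interaction planning: apply the user's offsets to the learned set-points,
    then roll the CIPC model forward for `horizon` frames (0.2 s at 60 Hz).

    step_cipc(state, params, dt) is a hypothetical one-frame forward-dynamics step that
    evaluates Eqs. (2)-(5) internally; state0 carries the current cart positions, velocities,
    and lean angles of both pendulums."""
    p = dict(params)
    p["d_des"] = p["d_des"] + user_offsets.get("d", 0.0)
    p["theta_dot_des"] = p["theta_dot_des"] + user_offsets.get("theta_dot", 0.0)

    trajectory, state = [state0], state0
    for _ in range(horizon):
        state = step_cipc(state, p, dt)
        trajectory.append(state)
    return trajectory
```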

The application of motion control by a user and the use of different parameters can change the distance between the two characters, which affects the spatio-temporal consistency. In this case, the motion of each character still appears natural because the physical consistency is maintained, but interpenetration between the two characters may occur, which makes the interaction unrealistic (see Figure 6 (a)). To maintain the spatio-temporal consistency, it is important to keep the motion and the context similar to the motion capture data. To achieve this, we predict the pendulum motion over a few frames and optimize the value function:

$\Delta d^{p1}_{i, i_{forward}} = \operatorname{arg\,min}_{d^{p1}} \sum_{i \le k \le K} \lVert \mathrm{project}(d^{c}_k - d^{p}_k) \rVert^2$,   (7)

where $\Delta d^{p1}_{i, i_{forward}}$ is the offset of the desired pendulum distance, defined along the facing vector in Eq. (1), $i$ and $i_{forward}$ are the current frame and the frame after 0.2 second, respectively, $K$ is the number of frames in 0.2 second (in a 60-Hz motion capture scenario such as ours, $K$ is 12), and $d^{p}_k$ and $d^{c}_k$ are the distances in the CIPC model and in the motion capture, respectively. By minimizing the difference between the current context and the motion capture context in Eq. (7), the newly estimated parameter controls the CIPC and edits the characters, thereby maintaining the consistency of the two-character animation (see Figure 6 (a)). This online optimization process improves the spatio-temporal consistency by maintaining the distance between the two characters, as shown in Figure 6 (b).

Figure 9: Comparison of the angular velocity of the local coordinate frame defined by the two characters. The red line represents the angular velocity in a new situation, and the black line represents the angular velocity extracted from the motion capture data.
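A sketch of the online optimization of Eq. (7) is given below, assuming a hypothetical preview function predict_distances that rolls the CIPC forward under a candidate offset; a bounded scalar search with an illustrative range stands in for whichever optimizer the authors actually used.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def distance_offset(d_mocap, predict_distances, i, K=12):
    """Eq. (7): the offset of the desired pendulum distance that best matches the captured
    distances d^c_k over the next K frames (0.2 s at 60 Hz).

    predict_distances(offset, i, K) is a hypothetical preview that rolls the CIPC forward
    from frame i under a modified desired distance and returns K predicted distances d^p_k."""
    d_mocap = np.asarray(d_mocap, dtype=float)

    def cost(offset):
        d_pred = np.asarray(predict_distances(offset, i, K), dtype=float)
        return float(np.sum((d_mocap[i:i + K] - d_pred) ** 2))

    res = minimize_scalar(cost, bounds=(-0.5, 0.5), method="bounded")  # search range in metres
    return res.x
```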

5.2. Motion planning

Adapting the motion planning framework proposed in [KH10], the coupled pendulum trajectory generator and kinematic mapping are combined as shown in Figure 1. The coupled pendulum trajectory generator calculates the future pendulum trajectory based on the current pendulum state and the interaction parameters estimated using Eq. (6). In kinematic mapping, the corresponding character motion is attached to the planned coupled pendulum trajectory by solving the inverse kinematics for the limbs (see Figure 5). With this approach, if a user increases the desired distance defined in Eq. (4), the character takes longer steps to maintain the spatio-temporal consistency. The full-body motion of a character is represented by a root transformation matrix and the feet positions. The root transformation matrix, representing the pelvis configuration of the human character model in the current frame, is obtained using the local coordinates computed using Eq. (1). The displacement between the coordinate frame matrix and the root transformation matrix can be acquired from the motion capture data and the planned pendulum trajectory by assuming that the character's upper body is rigidly attached to the CIPC model.

Figure 10: Comparison of the generated motions based on two different methods. The motion in (a) is produced using the kinematic stitching method, and the edited kick motion in (b) is produced using the CIPC model. Foot slipping occurs in (a), and the kinematic stitching scheme degrades physical plausibility.

The foot stepping position for each limb represents the position where the character steps, and it allows the lower-body motion to be edited. The stepping pattern is generated such that the end-effector stays at its desired position during each support phase, while it moves from the previous support position to the next support position along the shortest path during the swing phase. The local coordinates for each limb are generated by sampling the planned cart trajectory of the individual pendulum (see Figure 5). Again, the displacement between the local coordinate and the actual foot configuration is obtained from the motion capture data. Based on the estimated foot position, the limb motion is generated using the inverse kinematics solver described in [KSG02]. The root transformation and the feet positions of character 2 are calculated in the same manner as those of character 1.
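The support/swing stepping pattern can be sketched as a simple interpolation; the straight-line blend and the sinusoidal foot lift are assumptions for illustration, and the actual limb pose is produced by the inverse kinematics solver of [KSG02].

```python
import numpy as np

def foot_position(p_prev, p_next, t1, t2, t, step_height=0.08):
    """Foot position at time t for one support-swing-support cycle (cf. Figure 5): the foot
    stays at p_prev during support (t <= t1), moves along the straight line from p_prev to
    p_next during the swing phase (t1..t2) with a simple vertical arc, and stays at p_next
    afterwards."""
    p_prev = np.asarray(p_prev, dtype=float)
    p_next = np.asarray(p_next, dtype=float)
    if t <= t1:
        return p_prev
    if t >= t2:
        return p_next
    s = (t - t1) / (t2 - t1)                                   # swing-phase parameter in [0, 1]
    p = (1.0 - s) * p_prev + s * p_next                        # shortest path on the ground
    p[1] += step_height * np.sin(np.pi * s)                    # lift the foot (y assumed vertical)
    return p
```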

6. Results

To verify the proposed framework, we performed multiple experiments using our Muay Thai motion capture data, which were sampled at 60 Hz. The first experiment was conducted to reproduce exactly the same two interactive motions as the captured motion in the same context, using the same parameters acquired in the motion analysis step. Second, we edited the captured motion by simulating different contexts to demonstrate the generalizability of the proposed method. To edit the two-character motions in different contexts, we modified the interaction parameters acquired in the motion analysis step by adding user-specified parameters $d_{user}$ and $\dot{\theta}_{user}$. The user-specified parameters were set to move the two characters more dynamically while retaining their spatio-temporal relationship. For example, when the direction parameter was positive, we added a user-specified constant to that parameter; otherwise, we subtracted the same amount. This way, the overall context was kept the same while producing more dynamic motion, as shown in Figure 8. Without the online optimization described in Section 5.1, the distance between the two pendulums sometimes overshoots, resulting in visible collisions between the two characters.


After the online optimization, the interactions appeared more natural, maintaining the spatial relationship precisely (see Figure 6). The resulting motion retained both behavioral and spatio-temporal consistency under more extreme editing (see Figure 11). Figure 9 shows the difference in the angular velocity of the local coordinate frame defined by the two characters, to show how the input motion was changed. In addition, we show that our proposed method can be combined with the online synthesis method proposed in [KCPS08]. Specifically, we used the same motion selection algorithm and only replaced the previous kinematic coupling algorithm with our CIPC model. Figure 10 compares the resulting motions after applying only the root transformation matrix (a) and after applying the IK solver (b). Figure 10 (a) shows that the character moves while kicking (it is more natural not to move while kicking) and that the foot slips. Using the proposed method based on the footstep pattern generator shown in Figure 5, the foot position was adjusted adaptively according to the new situation, and thus the foot slipping was reduced (see Figure 10 (b)). The next experiment shows that the captured two-character motions can be edited according to a user input. Finally, we tested character synthesis controlled by a user input signal. In this experiment, character 1 was controlled by a user, and character 2 was controlled automatically. Due to the feedback signals, both characters interactively adapted to each other. For user inputs, we used the directional keys on a keyboard to change the desired distance and angular velocity defined in Eqs. (4) and (5). That is, pressing the UP and DOWN keys sets $\hat{d}^{p1}_{user} = \hat{d}^{p1} + d_{user}$ and $\hat{d}^{p1}_{user} = \hat{d}^{p1} - d_{user}$, respectively, and pressing the RIGHT and LEFT keys sets $\hat{\dot{\theta}}^{p1}_{user} = \hat{\dot{\theta}}^{p1} + \dot{\theta}_{user}$ and $\hat{\dot{\theta}}^{p1}_{user} = \hat{\dot{\theta}}^{p1} - \dot{\theta}_{user}$, respectively. The feedback signals and the online optimization procedure, explained in Sections 4.3 and 5.1, continuously maintained the distance to the opponent. All the results are also included in the accompanying video.
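In code, this key mapping reduces to a few parameter offsets; the step sizes in the sketch below are illustrative, not the values used in the experiments.

```python
def apply_key(key, d_des, theta_dot_des, d_user=0.2, theta_dot_user=0.5):
    """Map the directional keys to offsets of the desired distance and angular velocity
    (Section 6).  The step sizes d_user and theta_dot_user are illustrative only."""
    if key == "UP":
        d_des += d_user
    elif key == "DOWN":
        d_des -= d_user
    elif key == "RIGHT":
        theta_dot_des += theta_dot_user
    elif key == "LEFT":
        theta_dot_des -= theta_dot_user
    return d_des, theta_dot_des
```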

7. Conclusion

The proposed control system can edit the motions of two interacting characters simultaneously. To animate the motions of two interacting characters in a natural manner, the motions must be edited while maintaining their spatio-temporal and physical consistency. To achieve this goal, we proposed the CIPC model, which ensures physically plausible interactions and the maintenance of the spatio-temporal consistency. The model can be controlled by a user while responding to the opponent's motion.

There are a few limitations. When the original two-character motion is edited too much, the resulting motions inevitably contain unnatural elements such as overly long strides or leg intersections. In our experiments, editing in the direction where the overall rotational speed is exaggerated was much easier and safer than ignoring the original context and controlling the character in the opposite direction from the captured movement.

Figure 11: Comparison between the captured motion (a) and the edited motion (b) generated by the proposed framework. The proposed framework retains motion quality after changing the context. A screenshot is taken every 3 frames. The motions shown in this figure are the same as those in Figure 8.

Carefully looking at this issue would give us interesting insights about what the CIPC model is and is not capable of modeling. We do not consider the physical contact between the characters. Also, it is assumed that the original strength of the attacks, such as kicks and punches, in the captured reference motion is not modified in the editing or synthesis process. Accurate consideration of the full-body dynamics would not be easy; however, we believe that physical interactions can be approximately modeled within the CIPC model using existing ideas such as virtual forces [PCT∗01].

In future research, we would like to combine the idea of a full-body interaction mesh, such as the one proposed by Ho et al. [HKT10], with our model in a physically based manner to maintain the spatial relationships among the body parts and to prevent interpenetrations. Ultimately, we would also like to extend our scheme to consider full-body dynamics for interactive motion generation, and to allow crowd animation by modeling interactions among more than two characters.

8. Acknowledgement

This work was supported by the Global Frontier R&D Program on <Human-centered Interaction for Coexistence> funded by the National Research Foundation of Korea grant funded by the Korean Government (MEST) (NRF-MIAXA003-2010-0029744). This work was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2014R1A1A1038386).

References

[AFO05] ARIKAN O., FORSYTH D. A., O'BRIEN J. F.: Pushing people around. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2005), ACM, pp. 59–66.

[Che04] CHENNEY S.: Flow tiles. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2004), Eurographics Association, pp. 233–242.

[DAC] DORATO P., ABDALLAH C., CERONE V.: Linear-quadratic control: an introduction, 1995.

[GM85] GIPPS P. G., MARKSJÖ B.: A micro-simulation model for pedestrian flows. Mathematics and Computers in Simulation 27, 2 (1985), 95–105.

[HCKL13] HO E. S., CHAN J. C., KOMURA T., LEUNG H.: Interactive partner control in close interactions for real-time applications. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP) 9, 3 (2013), 21.

[Hel92] HELBING D.: A fluid-dynamic model for the movement of pedestrians. Complex Systems 6 (1992), 391–415.

[Hen74] HENDERSON L.: On the fluid mechanics of human crowd motion. Transportation Research 8, 6 (1974), 509–515.

[HFV00] HELBING D., FARKAS I., VICSEK T.: Simulating dynamical features of escape panic. Nature 407, 6803 (2000), 487.

[HK09a] HO E. S., KOMURA T.: Character motion synthesis by topology coordinates. In Computer Graphics Forum (2009), vol. 28, Wiley Online Library, pp. 299–308.

[HK09b] HO E. S., KOMURA T.: Indexing and retrieving motions of characters in close contact. IEEE Transactions on Visualization and Computer Graphics 15, 3 (2009), 481–492.

[HK11] HO E. S., KOMURA T.: A finite state machine based on topology coordinates for wrestling games. Computer Animation and Virtual Worlds 22, 5 (2011), 435–443.

[HKHL13] HYUN K., KIM M., HWANG Y., LEE J.: Tiling motion patches. IEEE Transactions on Visualization and Computer Graphics (2013).

[HKT10] HO E. S., KOMURA T., TAI C.-L.: Spatial relationship preserving character motion adaptation. ACM Transactions on Graphics (TOG) 29, 4 (2010), 33.

[HMO12] HOYET L., MCDONNELL R., O'SULLIVAN C.: Push it real: perceiving causality in virtual interactions. ACM Transactions on Graphics (TOG) 31, 4 (2012), 90.

[Hug02] HUGHES R. L.: A continuum theory for the flow of pedestrians. Transportation Research Part B: Methodological 36, 6 (2002), 507–535.

[Hug03] HUGHES R. L.: The flow of human crowds. Annual Review of Fluid Mechanics 35, 1 (2003), 169–182.

[KCPS08] KWON T., CHO Y.-S., PARK S. I., SHIN S. Y.: Two-character motion analysis and synthesis. IEEE Transactions on Visualization and Computer Graphics 14, 3 (2008), 707–720.

[KGM13] KIM S., GUY S. J., MANOCHA D.: Velocity-based modeling of physical interactions in multi-agent simulations. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2013), ACM, pp. 125–133.

[KH10] KWON T., HODGINS J.: Control systems for human running using an inverted pendulum model and a reference motion capture sequence. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2010), Eurographics Association, pp. 129–138.

[KHKL09] KIM M., HYUN K., KIM J., LEE J.: Synchronized multi-character motion editing. In ACM Transactions on Graphics (TOG) (2009), vol. 28, ACM, p. 79.

[KLLT08] KWON T., LEE K. H., LEE J., TAKAHASHI S.: Group motion editing. In ACM Transactions on Graphics (TOG) (2008), vol. 27, ACM, p. 80.

[KNK∗04] KAJITA S., NAGASAKI T., KANEKO K., YOKOI K., TANIE K.: A hop towards running humanoid biped. In Robotics and Automation, 2004. Proceedings. ICRA '04. 2004 IEEE International Conference on (2004), vol. 1, IEEE, pp. 629–635.

[KPS03] KIM T.-H., PARK S. I., SHIN S. Y.: Rhythmic-motion synthesis based on motion-beat analysis. In ACM Transactions on Graphics (TOG) (2003), vol. 22, ACM, pp. 392–401.

[KSG02] KOVAR L., SCHREINER J., GLEICHER M.: Footskate cleanup for motion capture editing. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2002), ACM, pp. 97–104.

[LCF05] LAI Y.-C., CHENNEY S., FAN S.: Group motion graphs. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2005), ACM, pp. 281–290.

[LFCCO09] LERNER A., FITUSI E., CHRYSANTHOU Y., COHEN-OR D.: Fitting behaviors to pedestrian simulations. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2009), ACM, pp. 199–208.

[LHP06] LIU C. K., HERTZMANN A., POPOVIC Z.: Composition of complex optimal multi-character motions. In Proceedings of the 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2006), Eurographics Association, pp. 215–222.

[LL06] LEE J., LEE K. H.: Precomputing avatar behavior from human motion data. Graphical Models 68, 2 (2006), 158–174.

[MP07] MCCANN J., POLLARD N.: Responsive characters from motion fragments. In ACM Transactions on Graphics (TOG) (2007), vol. 26, ACM, p. 6.

[PCT∗01] PRATT J. E., CHEW C.-M., TORRES A., DILWORTH P., PRATT G. A.: Virtual model control: An intuitive approach for bipedal locomotion. I. J. Robotic Res. 20, 2 (2001), 129–143.

[SKSY08] SHUM H. P., KOMURA T., SHIRAISHI M., YAMAZAKI S.: Interaction patches for multi-character animation. In ACM Transactions on Graphics (TOG) (2008), vol. 27, ACM, p. 114.

[SKY07] SHUM H. P., KOMURA T., YAMAZAKI S.: Simulating competitive interactions using singly captured motions. In Proceedings of the 2007 ACM Symposium on Virtual Reality Software and Technology (2007), ACM, pp. 65–72.

[SKY12] SHUM H. P. H., KOMURA T., YAMAZAKI S.: Simulating multiple character interactions with collaborative and adversarial goals. IEEE Transactions on Visualization and Computer Graphics 18, 5 (2012), 741–752.

[Sug08] SUGIHARA T.: Simulated regulator to synthesize ZMP manipulation and foot location for autonomous control of biped robots. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on (2008), IEEE, pp. 1264–1269.

[TCP06] TREUILLE A., COOPER S., POPOVIC Z.: Continuum crowds. In ACM Transactions on Graphics (TOG) (2006), vol. 25, ACM, pp. 1160–1168.

[WAH∗10] WAMPLER K., ANDERSEN E., HERBST E., LEE Y., POPOVIC Z.: Character animation in two-player adversarial games. ACM Transactions on Graphics (TOG) 29, 3 (2010), 26.

[ZH02] ZORDAN V. B., HODGINS J. K.: Motion capture-driven simulations that hit and react. In Proceedings of the 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2002), ACM, pp. 89–96.

[ZMCF05] ZORDAN V. B., MAJKOWSKA A., CHIU B., FAST M.: Dynamic response for motion capture animation. ACM Transactions on Graphics (TOG) 24, 3 (2005), 697–701.
