


HAL Id: hal-02127824
https://hal.inria.fr/hal-02127824

Submitted on 17 May 2019

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Image-based Authoring of Herd Animations
Pierre Ecormier-Nocca, Julien Pettré, Pooran Memari, Marie-Paule Cani

To cite this version: Pierre Ecormier-Nocca, Julien Pettré, Pooran Memari, Marie-Paule Cani. Image-based Authoring of Herd Animations. Computer Animation and Virtual Worlds, Wiley, In press, 30 (3-4), pp. 1-11. 10.1002/cav.1903. hal-02127824


Image-based Authoring of Herd Animations

Pierre Ecormier-Nocca (1), Julien Pettré (2), Pooran Memari (1), Marie-Paule Cani (1)
(1) LIX, École Polytechnique, CNRS; (2) Inria

Figure 1: To create a herd animation, the user selects the desired visual aspects over time on photos, by placing them as key-frames on the terrain (top left). The analysis-synthesis method generates, for each photo, a visually similar key-herd with the target number N of animals (bottom left). The animation is then generated (right), thanks to an automatic labelling process to pair the animals of successive key-herds.

Abstract

Animating herds of animals while achieving both convincing global shapes and plausible distributions within the herd is difficult using simulation methods. In this work, we allow users to rely on photos of real herds, which are widely available, for key-framing their animation. More precisely, we learn global and local distribution features in each photo of the input set (which may depict different numbers of animals) and transfer them to the group of animals to be animated, thanks to a new statistical learning method that can analyze distributions of ellipses, as well as their density and orientation fields. The animated herd reconstructs the desired distribution at each key-frame while avoiding obstacles. As our results show, our method offers both high-level user control and help towards realism, making it easy to author herd animations.

Keywords: image-based animation, distribution synthesis, crowd simulation.

Introduction

While methods for easing the generation of complex virtual worlds have spread widely in the last decade, leading to impressive results with detailed terrains as well as plausible distributions of rocks and vegetation, much less attention has so far been paid to animal life. Yet, populating virtual worlds not only with vegetation blowing in the wind and a few birds, but also with moving groups of ground animals, is mandatory to get a lively result.


This work tackles the authoring of herd animations. The key idea is to enable intuitive authoring and enhanced realism through visual analogies with real photos. More precisely, we allow the user to key-frame the animation of a herd by copy-pasting a series of photos over the terrain using a pipette tool. In addition to the rough trajectories they define, each photo is used as a model for the local shape, distribution and density of the animals within the herd.

The input photos may indeed show quite different numbers of animals. Therefore, our solution relies on a new statistical method to analyse the photos independently of animal count, and then synthesize a visually similar herd while accounting for the target number of animals, their size and local obstacle avoidance. In addition to the statistical distributions governing the relative distance between animals (modeled as oriented ellipses), our model extracts and reproduces two higher-level features from each photo, namely the herd's density map (also defining its global shape) and the local orientation map for animals within the herd. If desired, these features can be edited at a high level by the user using brush strokes.

Lastly, we propose a simple method to generate the individual trajectories of the animals from one herd key-frame to the next. It includes an automatic best-pairing mechanism between animals in successive key-herds, and a microscopic simulation enabling animals to avoid collisions while following the trajectory and exhibiting the desired relative motion in the herd frame.

In summary, our three main contributions are:

• An intuitive authoring tool for key-framing herds, using the concept of a pipette tool for copy-pasting the herd's visual aspect from a photo, and providing additional editing brushes;

• A new, two-level method to analyze both global herd features and local distributions of animals on images, and to synthesize visually similar ones with a given target number of animals;

• A layered model using both the herd's global trajectory and the desired relative motion within the herd to generate the animation between successive key-frames.

Related work

Animating animals: Microscopic simulation has long been the standard method for animating small groups of ground creatures such as herds of animals (see [1] for a survey). Reynolds' pioneering work on Boids [2, 3] introduced three interaction forces of different ranges to respectively model cohesion, relative orientation and collision avoidance within a flock. Subsequent improvements, validated on groups of humans, include velocity-based [4] and vision-based [5] approaches. Recent work also tackled very specific animal behavior such as insect swarms [6, 7]. Despite their ability to generate nice, plausible motion, microscopic methods are user intensive, since many trials and errors may be necessary to tune parameters and author an animation. While some methods can be used to animate flocks [8, 9] or pedestrian groups [10, 11] matching specified shapes, control over the distribution of elements within the shapes is not covered by the scope of these papers. In this work, we only use microscopic simulation to interpolate between herd key-frames, enabling the user to transfer the visual aspect of herds from photos.

Image-based animation: In the case of humans, trajectories extracted from videos of crowds were used to learn parameters of microscopic models [12, 13], facilitating their tuning.

These methods, however, involved either a large amount of manual labour or complex extraction methods, on top of requiring multiple frames of video data for each situation or crowd type. Note that such video data, showing the same creatures during a number of consecutive frames, are much more difficult to acquire for animals. In contrast, our method is based on the analysis of a few static pictures of herds, which may represent different numbers of animals, making data very easy to acquire.

Analyzing real images was already used for animal animation, but in the context of key-framing the motion of a single animal, either from a video [14] or from a static picture of a herd [15]. To the best of our knowledge, it was never applied to learn the visual features of herds such as global shapes, animal distributions, density and relative orientations, the problem we are tackling here.

Learning and synthesizing distributions: Among recent authoring methodologies for virtual worlds, a strong interest was recently shown in learning statistical distributions of elements (such as rocks or vegetation) from exemplars, the latter being either user-defined, extracted from photos, or taken from simulation results. After learning using a pipette tool, interactive brushes were used to control synthesis over full regions of a terrain [16, 17]. In parallel, continuous Pair Correlation Functions (PCF) were introduced for more robustly handling statistical learning and synthesis [18, 19].

Although recent improvements tackled distributions of 2D shapes instead of points, the existing methods are not applicable to analyzing herd pictures. Firstly, most of them only handle non-overlapping shapes [20, 21], while animals within herds are often in close contact. The only method explicitly handling contacts and overlaps [22] is limited to disks and is not suited to our data, where animals are anisotropic and oriented. Secondly, all current methods tackled the analysis of a supposedly uniform distribution of elements. In contrast, the overall shape of a herd is important, and the density of animals usually varies locally depending on the distance to the border. Lastly, the distribution of orientations within a herd is also an important visual feature. Therefore, we introduce our own analysis-synthesis method for herds, based on the combination of PCFs with density and orientation maps.

Overview of the Authoring tool

We present a method for authoring herd animations based on data extracted from still photos.

Authoring interface

Our interactive tool enables the user to upload a virtual world, which may contain a non-flat terrain with obstacles such as trees and rocks (top left of Figure 1). The user then chooses a type of animal in a library, sets its maximal speed Vmax (which can, if desired, be extracted from zoological information) and the number N of animals in the herd to be animated.

Our framework is inspired by traditional animation principles, namely the key-framing and interpolation workflow. In the remainder of this section, the key-frames correspond to important locations where the user requests a specific aspect for the herd, extracted from real-world pictures. In practice, the user can also add position-only key-frames to edit the trajectory without imposing a specific aspect of the herd.

Using an analogy with drawing software, we allow the user to upload reference photos of real herds in a palette window and to extract the herd aspect from them using a pipette tool to define key-frames. To provide intuitive visual feedback, the selected photo is copy-pasted on the terrain at the position where the user would like to impose this specific aspect of the herd.

As in usual pipelines, the animation process is fully automatic once key-frames have been positioned on a timeline. Yet, the user can not only edit the timing and choice of reference images at key-frames, but also some high-level parameters, namely the density and orientation maps at each key-frame, as detailed below.

Method and challenges

Key-herds from photos: From each key-frame position with an associated photo, the first step is to generate a key-position for a herd with the target number N of animals, which we call a key-herd (see bottom left of Figure 1). The latter should reproduce the visual aspect of the herd on the photo, in terms of global shape, distribution and orientation of animals, while avoiding obstacles. The main challenges are to define and learn the right set of global and local descriptors from the photo, and then to define an algorithm able to use them at the synthesis stage. Our solutions are detailed in the Analysis and synthesis section.

Animation: Once key-herds have been generated and projected onto the terrain, a global trajectory is computed between their centroids. Individual trajectories are then automatically generated for each animal. Achieving a fluid motion is a challenge, since it requires consistently labelling the animals within the successive key-herds. Moreover, the individual trajectories need to ensure collision avoidance between animals and with obstacles on the terrain, while following the herd's global path and matching the required change of position in the herd frame. Our solutions to these problems are detailed in the Herd animation section.

Analysis and synthesis of herds

Our processing pipeline for generating a key-herd from a photo is detailed in Figure 2. The photo is first pre-processed to extract oriented ellipses giving the position and orientation of the animals. We analyze the resulting distribution to compute correlation curves modeling the local interactions between animals, as well as editable features in the form of density and orientation maps. We take the global orientation of the herd trajectory, the possible obstacles on the terrain and the size of the target herd into account to modify these maps, which may also be interactively edited. Finally, we use both curves and features to synthesize a perceptually similar herd of the right size. We detail these steps below.

Data extraction from a single image

We use a semi-automatic method to extract data from an image. It only requires manual tuning in ambiguous regions and greatly eases the extraction work for large images.

We ask the user to manually annotate a few spots on the image (see Figure 3) as foreground (the animals) or background (for ground or other unwanted features). A specific Support Vector Machine (SVM) is then created and trained to discriminate foreground from background pixels based on their colour in the user-defined annotations, and subsequently used to create a black-and-white mask of these features. Different regions, usually corresponding to different animals, are then extracted using a Watershed Transform. Finally, we fit ellipses on each of these regions to retrieve their position, orientation and dimensions. The global direction given to all ellipses is chosen arbitrarily for an image and can be flipped if necessary.
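As an illustration of the final fitting step, an oriented ellipse can be recovered from a region's pixels via second-order moments. This is a standard technique, not necessarily the paper's exact fitting method, and the function name is illustrative:

```python
import numpy as np

def fit_ellipse(region_pixels):
    """Fit an oriented ellipse to a set of (row, col) pixel coordinates
    using second-order moments (illustrative stand-in for the paper's
    per-region ellipse fit)."""
    pts = np.asarray(region_pixels, dtype=float)
    center = pts.mean(axis=0)
    # The covariance of the pixel cloud encodes spread and orientation.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    b, a = 2.0 * np.sqrt(eigvals)              # semi-minor, semi-major axes
    # Major-axis direction is the eigenvector of the largest eigenvalue.
    theta = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])
    return center, a, b, theta
```

Given the fitted parameters, each region is then summarized by its center, semi-axes and orientation, which is the representation used throughout the rest of the pipeline.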

Some animals can be wrongly captured as a single, large region if they were too close to each other. We detect these cases with an analysis of the distribution of extracted region sizes: in such situations, the created ellipses are too large, too small or of unusual proportions. They are automatically detected and manually corrected with the right shapes.

A PCF-based method for interactions

We model the local interactions between animals by extending the concept of the Pair Correlation Function (PCF) to distributions of ellipses within arbitrary shapes. PCFs are smooth functions that represent the average density of objects in a distribution with respect to an increasing distance between those objects. They are known to have many interesting properties, such as being density invariant and characterising well the visual aspect of distributions (see [22] for details).

The fact that PCFs are smooth curves makes them robust to noise. They are computed by weighting the contributions of neighbors at distance r with the Gaussian kernel kσ(x) = (1/(√π σ)) e^(−x²/σ²). For the case of a 2D point distribution within the unit square domain, the PCF can be expressed as:

PCF(r) = (1 / (A_r n²)) Σ_{i,j} kσ(r − d_ij)   (1)

where A_r is a normalisation factor defined as the area of the ring of radius r and thickness 1, and d_ij is the distance between points i and j.
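For 2D points, Eq. (1) can be sketched in a few lines of Python (the paper's analysis code is in Python, but this function and its defaults are illustrative, and the border correction discussed below is omitted):

```python
import numpy as np

def pcf(points, radii, sigma=0.02):
    """Pair Correlation Function of a 2D point set (Eq. 1): a
    Gaussian-smoothed density of pairwise distances, normalised by
    the ring area A_r and the squared point count n^2."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    d = d[~np.eye(n, dtype=bool)]          # drop self-distances (i == j)
    out = []
    for r in radii:
        ring_area = 2.0 * np.pi * r        # ring of radius r, unit thickness
        kernel = np.exp(-((r - d) / sigma) ** 2) / (np.sqrt(np.pi) * sigma)
        out.append(kernel.sum() / (ring_area * n * n))
    return np.array(out)
```

On a regular grid of points, this sketch peaks near the grid spacing and stays near zero at smaller radii, which is the kind of signature the synthesis stage tries to reproduce.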

From point to ellipse distributions: To extend the framework to distributions of ellipses, we need an appropriate distance encoding both the size and the orientation of each ellipse. We simplify the problem of computing an ellipse-to-ellipse distance by using two ellipse-to-point distance computations, one for each ellipse of every pair. This choice is consistent with the fact that PCFs compute and average the distances of each element w.r.t. the others in the distribution.

Figure 2: Pipeline for computing a key-herd. From oriented ellipses extracted from the photo (top left), correlation curves and editable descriptors are computed. The descriptors are adapted to the trajectory orientation and obstacles (bottom), and then used in conjunction with the correlation curves to produce a key-herd with N animals (right). (Stages shown: input data; environment and constraints; correlation curves; editable descriptors; generated descriptors; result.)

Figure 3: Data extraction. From left to right, top to bottom: input, annotated image, binary segmentation, extracted ellipses with detected outliers in red.

In order to take into account the size and the orientation of the elements while computing ellipse-to-point distances, each point is expressed in local coordinates relative to the ellipse. More formally, for the point (x, y) in local coordinates, we compute its distance to the axis-aligned ellipse using:

d(x, y) = √( x²/a² + y²/b² )   (2)

where x and y are the local coordinates of the point, and a, b are respectively the semi-major and semi-minor axes of the ellipse. More intuitively, this distance represents how much larger the ellipse would have to be for the point to lie on its contour. This scaling value is equal to 1 for all points on the ellipse.

Generalising the domain: Another challenge is to adapt the formulation of PCFs to arbitrary domains. Indeed, the domain shape needs to be

taken into account in the PCF equation in order to correct the bias induced by its borders. A usual correction is to weight the number of neighbors by the ratio between the area or perimeter of a disk and its intersection with the domain. While this is relatively straightforward for a square domain, it can become quite complex for arbitrary shapes, and we would lose too much information by reducing herds to squares.

Instead, we choose to define the domain as the convex hull of the data points, which is an arbitrary convex polygon. Although accurately computing the area of an elliptic ring inside a convex polygon is complex, we can efficiently provide a good approximation of the solution: we approximate the ellipse as a collection of as many triangles as there are edges in the convex hull of the domain. Each of these triangles is composed of the origin of the ellipse and two consecutive points on the hull polygon. We translate the points of the hull to the intersection between the ellipse and the convex hull. The area of the part of the ellipse in the domain is computed as the sum of the areas of the triangles. The area of the inner ellipse is subtracted to get the area of the elliptic ring within the domain.

In practice, these computations are only done for a few rings at regular distances around each point, the weighting coefficients, given by the ratio of the area of the ring in the domain vs. the area of the full ring, being interpolated in-between.
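The anisotropic ellipse-to-point distance of Eq. (2), which underlies these ring computations, can be sketched as follows, assuming each ellipse is given by its center, semi-axes a and b, and orientation theta (the function name is illustrative):

```python
import numpy as np

def ellipse_point_distance(point, center, a, b, theta):
    """Eq. (2): the factor by which the ellipse (semi-axes a >= b,
    rotated by theta about `center`) would have to be scaled for
    `point` to lie on its contour (1.0 on the contour itself)."""
    dx, dy = np.asarray(point, float) - np.asarray(center, float)
    # Rotate the offset into the ellipse's local, axis-aligned frame.
    c, s = np.cos(theta), np.sin(theta)
    x = c * dx + s * dy    # coordinate along the major axis
    y = -s * dx + c * dy   # coordinate along the minor axis
    return np.sqrt((x / a) ** 2 + (y / b) ** 2)
```

Points inside the ellipse get a value below 1 and points outside a value above 1, so the same kernel as in Eq. (1) can be applied to these scaled distances.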

Editable descriptors

On top of the correlation curves used to keep track of the local relationships between objects, at least one other descriptor, the orientation field, is required to produce a result faithful to the original data. One last, optional descriptor, the density map, can also be used in conjunction with the rest of the framework to replicate the density of regions at the same locations in our output.

Orientation field: The orientation field is defined everywhere in the domain, and allows us to assign a plausible orientation to any hypothetical point within the original convex hull.

We compute this field by extracting the Delaunay triangulation of the centers of the extracted ellipses representing animals, and assigning the orientation of the ellipse to each vertex of the resulting mesh, as can be seen in Figure 4. When querying the field for the orientation of a new arbitrary point, we find the triangle that contains this location and interpolate the three angles of its vertices using barycentric coordinates.

Density map: Extracting density maps as well helps create distributions as close to the original as possible, by reproducing the position of the denser areas and empty regions.

Our approach to do so is similar to the one used for the orientation field. From the Delaunay triangulation of the ellipse centers, we use the area of the 1-ring, i.e. the collection of neighboring vertices, as an indicator of how much free space is available around this location. We take the inverse of this area as an approximation of the local density and assign it to each point of the triangulation. We use barycentric-coordinate interpolation within each triangle to define density everywhere in the domain.
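Both descriptors can be queried with the same find-the-triangle-then-interpolate scheme. A sketch using SciPy's Delaunay triangulation (an implementation assumption; the paper does not name its triangulation library), interpolating per-vertex scalar values such as the local density estimates:

```python
import numpy as np
from scipy.spatial import Delaunay

def barycentric_field(samples, values, query):
    """Interpolate per-vertex values (e.g. local density stored at the
    ellipse centers) at `query`, using barycentric coordinates inside
    the containing Delaunay triangle; None outside the convex hull."""
    tri = Delaunay(np.asarray(samples, dtype=float))
    q = np.atleast_2d(query).astype(float)
    simplex = int(tri.find_simplex(q)[0])
    if simplex < 0:
        return None                        # query is outside the hull
    T = tri.transform[simplex]             # affine map to barycentric coords
    b = T[:2].dot(q[0] - T[2])
    bary = np.append(b, 1.0 - b.sum())
    return float(np.dot(bary, np.asarray(values, float)[tri.simplices[simplex]]))
```

Because barycentric interpolation is exact for fields that vary linearly over a triangle, a value field equal to the x-coordinate is reproduced exactly at any interior query point.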

Key-herd synthesis algorithm

The editable descriptors computed from the input image are first transformed and modified to account for the general orientation of the herd trajectory at the key-frame, the obstacles in the environment, and the size of the herd (each animal being of constant size, a herd of 100 will not take up the same surface area on the terrain as a herd of 1000): after rotation and projection of the density map on the terrain, the part covered by obstacles is masked out. The map is then scaled in or out until it fits the requested number of animals. The same transformation is then applied to the orientation map.

Our synthesis algorithm, which takes as input the PCFs of the input distribution, the density and orientation maps, and the number of ellipses required in the target distribution, is summarized in Algorithm 1 and further detailed in this section. The method used is a modified version of the dart-throwing algorithm.

Input: PCF, orientation field O, density map D, number of elements N
Output: Ellipse distribution

fails ← 0;
while fewer than N elements accepted do
    Sample ellipse from D;
    Sample orientation from O;
    Update PCF with new ellipse;
    if error decreased or fails > maxfails then
        Accept best ellipse;
        fails ← 0;
        continue;
    end
    fails ← fails + 1;
end

Algorithm 1: Synthesis algorithm

We first pick a random ellipse respecting the probability distribution of the density map. This sampling is done by computing running sums of densities per column of a discretized version of the density map, and then from one column to the next. This gives us the probability of having a point in each column and, for each column, the probability of having a point at each pixel. We use these cumulative density functions to sample random ellipses according to the density map. A rotation is then assigned to the ellipse using an orientation field query at its location. We update the current PCF with the impact of the new ellipse and compute the error E compared to the target PCF, after normalization to account for the difference in element count. We use:

E = Σ_r ( PCF(r) − PCF0(r) )² / PCF0(r)   (3)


Figure 4: The descriptors are computed by interpolation inside triangles from the Delaunay triangulation of the input data.

where r spans distances to the ellipse.

The ellipse is accepted if adding it to the distribution reduces the error. Otherwise, to make sure that the algorithm does not get stuck in an infinite loop, we keep track of the best candidates while searching for ellipses. If the algorithm cannot find a good enough solution before reaching a threshold maxfails, the best candidate from the previous tries is accepted.
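The cumulative-sum sampling of the density map described above can be sketched as follows (a simplified version that accumulates over the flattened map rather than column by column; the function name is illustrative):

```python
import numpy as np

def sample_from_density(density, n, rng):
    """Draw n (row, col) cells from a discretised density map via
    inverse-CDF sampling on cumulative sums, so that each cell is
    picked with probability proportional to its density."""
    density = np.asarray(density, dtype=float)
    cdf = np.cumsum(density.ravel())
    cdf /= cdf[-1]                                     # normalise to [0, 1]
    # side="right": a uniform draw u lands in the first cell whose cdf > u.
    flat = np.searchsorted(cdf, rng.random(n), side="right")
    rows, cols = np.unravel_index(flat, density.shape)
    return np.stack([rows, cols], axis=1)
```

Cells with zero density are never drawn, which is what lets the masked-out obstacle regions stay empty during synthesis.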

Descriptors as control tools

While it is possible to extract every component of our model from images and use them to create key-herds, the user can also edit the orientation field and density map to obtain finer control over the result.

The density map is a completely optional parameter, and removing it will yield an image that is similar but has local density extrema in different places. Manually painting a density map can be used for artistic purposes to alter the look of the resulting distribution.

Similarly, editing the orientation field leads to another form of control over the result. Indeed, replacing every orientation in the field by the same orientation produces a uniform flow, and smoothing or adding noise to the field can be used to control the level of detail in the emerging distribution.

The extent of this control is shown in Figure 5, with the top row showing the effects of orientation field changes while the bottom row showcases the impact of different density maps. Sub-figure 5b shows the result after merging the orientation field with orientations pointing straight right, and sub-figure 5c shows the effect of a pure rotational orientation field on this example. While changes in the density map are more subtle, their effects can still clearly be seen: the most important empty spots are mainly located to the left of the herd in sub-figure 5e, while they are in the center in sub-figure 5f.

Herd animation

Global herd trajectory

In this work, we extend the idea of key-framing an animation to a group of animals. While traditional key-frames can be used to represent important poses of an individual character, our key-herds encode a full group of animals at a specific point in time.

From this input, a global trajectory for the herd, modeled using an interpolation spline between the centroids of key-herds and the extra position-only key-frames, is generated and projected onto the terrain. The herd can be seen as a local frame that moves along this global trajectory, and within which the animals have some adequate relative motion.
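The paper does not specify which interpolation spline is used between centroids; a Catmull-Rom segment is one natural choice, since it passes through every control point. A sketch:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment between control points p1 and
    p2 at parameter t in [0, 1]; p0 and p3 shape the tangents.
    (Illustrative: the paper only states that a spline is used.)"""
    p0, p1, p2, p3 = (np.asarray(p, float) for p in (p0, p1, p2, p3))
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)
```

Chaining one such segment per consecutive pair of centroids yields a smooth path that passes exactly through every key-herd centroid and position-only key-frame.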

Generating individual trajectories

We first compute a consistent labeling to pair animals within successive key-herds, and then use microscopic simulation to define individual trajectories, as described next.

(a) Original descriptors  (b) Smoothed orientation  (c) Spiral orientation
(d) Uniform density  (e) Left-to-right density  (f) Radial density

Figure 5: Effect of the editable descriptors on the synthesised result, with changes to the orientation field (top) and density map (bottom).

Pairing based on optimal transport: Establishing a good correspondence between the animals of a key-herd and the following one is essential to generate fluid motion. In our work, we use optimal transport [23] for this computation, after virtually superposing the two successive key-herds to best align their centroids and orientations (by convention, the X axis in a local frame). More precisely, the mapping is computed as the optimal transport of one unit of mass for each animal, located in local coordinates relative to the key-herd, to the local positions of the animals in the next key-herd. Depending on the type of animation required, the metric used for this computation can be the Euclidean distance, the Euclidean distance to the nth power to increase the penalty on distant mappings, or a metric that penalises matching along one axis more than the other.
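With one unit of mass per animal and equal counts in both key-herds, the optimal-transport pairing reduces to a linear assignment problem; the sketch below uses SciPy's Hungarian-algorithm solver (an implementation assumption, not necessarily the solver of [23]):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_key_herds(local_a, local_b, power=2):
    """Pair animals of two aligned key-herds, given their positions in
    local herd coordinates. `power` raises the Euclidean distance to
    penalise distant matches more, as described in the text."""
    a = np.asarray(local_a, float)[:, None, :]
    b = np.asarray(local_b, float)[None, :, :]
    cost = np.linalg.norm(a - b, axis=-1) ** power  # pairwise cost matrix
    rows, cols = linear_sum_assignment(cost)        # minimise total cost
    return cols  # cols[i] = index in the next key-herd assigned to animal i
```

Swapping the cost matrix for an axis-weighted distance implements the anisotropic variant mentioned above.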

Herd-frame-guided simulation: The key idea to generate individual animal trajectories that both interpolate their assigned positions in two successive key-herds and follow the herd trajectory in-between is to use microscopic simulation following moving guides defined in the herd frame. This way, while the guides model the necessary relative motion within the herd, the steering behavior in the world frame makes it possible to avoid collisions with other animals and with obstacles.

We use a simple steering model based on five forces to generate individual motion. The first three forces are separation, alignment and cohesion [2]. They are responsible for the general behaviour of the individuals. They are completed by an extra force to handle collisions with the environment: it steers the animals away from obstacles if their predicted future position, given their current position and velocity, gets too close.
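The three classic Boids terms can be sketched as follows (the neighbourhood radius, weights and function names are illustrative; the paper does not give its exact formulation):

```python
import numpy as np

def steering_forces(pos, vel, idx, radius=2.0):
    """Separation, alignment and cohesion for animal `idx`, averaged
    over neighbours within `radius` (classic Boids terms [2])."""
    d = np.linalg.norm(pos - pos[idx], axis=1)
    mask = (d > 0) & (d < radius)            # neighbours, excluding self
    if not mask.any():
        return np.zeros(2), np.zeros(2), np.zeros(2)
    neigh = pos[mask]
    separation = np.mean(pos[idx] - neigh, axis=0)     # push away from neighbours
    alignment = np.mean(vel[mask], axis=0) - vel[idx]  # match neighbour velocity
    cohesion = np.mean(neigh, axis=0) - pos[idx]       # pull towards local centre
    return separation, alignment, cohesion
```

In a full simulation these three terms would be weighted, summed with the obstacle-avoidance and guide-following forces, and integrated at each time step.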

The last force controls the animal movement by giving them guides to follow. The latter move in a straight line, in the local herd frame, from their previous position in a key-herd to the next. Guide positions are transformed to the world frame at each time step, while the herd frame follows its global trajectory. In addition, to account for the actual speed of the associated animals, which can be slowed down by other interactions, the position of the herd frame along its trajectory only moves forwards when all the associated animals are about to reach their guides.
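Transforming a guide from the herd's local frame (X axis aligned with the herd heading, per the convention above) to world coordinates is a rotation plus a translation; a minimal sketch, with illustrative names:

```python
import numpy as np

def guide_world_position(guide_local, herd_pos, herd_heading):
    """Map a guide position from the herd frame (X axis = herd heading)
    to world coordinates: rotate by the heading, then translate by the
    herd frame's current position on its global trajectory."""
    c, s = np.cos(herd_heading), np.sin(herd_heading)
    rot = np.array([[c, -s],
                    [s, c]])                 # 2D rotation by the heading angle
    return np.asarray(herd_pos, float) + rot.dot(np.asarray(guide_local, float))
```

Applying this at every time step, with the herd position and heading sampled from the global trajectory, yields the moving world-space targets that the last steering force pulls each animal towards.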

Note that in our current implementation, the orientation of animals can be computed either from their movement or by interpolating the orientations of successive key-herds.

Results and discussion

Our framework is implemented in Python for herd analysis and synthesis, and in C# on the Unity Engine for user interaction and rendering. The timings reported in Table 1 were measured on an Intel Core i5 processor at 3.3 GHz equipped with 8 GB of memory.


Figure 6: Generated herds with different target herd sizes. The size in the input image is in red.

Figure 7: Frames from an animation showing interaction with an obstacle.

Herd size   Ex. 1 (N = 1054)   Ex. 2 (N = 43)   Ex. 3 (N = 220)
5           1.8 s              0.5 s            0.7 s
25          1.9 s              0.5 s            0.8 s
100         2.2 s              5.1 s            6.2 s
200         3.5 s              16.2 s           15.4 s
500         15.9 s             73.4 s           59.3 s

Table 1: Benchmark of synthesis time

Figure 6 shows synthesis results for three different input images depicting herds of widely varying sizes and visual patterns. For each input, generation is performed with different herd sizes to illustrate the robustness of our method.

Animation results are depicted in Figures 1 and 7. As these results show, our method can handle challenging animation scenarios with significant changes in the herd shapes due to user input or environmental constraints, while maintaining the global aspect of the herd distribution learned from the input herd photos. A demo is provided in the supplementary video.

Limitations: Firstly, although orientation fields are properly reconstructed at herd key-frames, we failed to achieve an animation of individual animals that both matches this orientation field and results in natural motion, as can be seen in the supplementary material and Figure 8. This comes from the fact that linearly rotating guides between their target positions and adding angular forces within the steering behavior produce strange behaviors, such as animals moving sideways or even backwards. Achieving animations that match orientation constraints is therefore left for future work.

Figure 8: Compared to computing orientations based on animal movement (top), interpolating orientations between key-herds (bottom) helps to reproduce variety in a herd, but can result in sideways movement.

Secondly, although we offer precise control of the statistical distributions of animals at key-frames, our method does not offer any guarantee on the plausibility of the distributions in-between. In particular, even when the same reference image is used for two successive key-herds, the fact that we use microscopic simulation to process local interactions in-between may alter the intermediate distribution. A last limitation is the difficulty of reproducing constrained formations such as the queue of animals in Figure 6, bottom. This is due to the interpolation of the density map, which diffuses density where there was none in the input image, and to the distance used, which does not discriminate front from side apart from the inherent shape of the ellipses.
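The density-diffusion limitation can be seen in a per-cell linear blend of two density maps, which is the simplest model of such an interpolation (a hypothetical helper, not the paper's actual scheme): any cell that is empty in one map but occupied in the other receives non-zero density at every intermediate step.

```python
def lerp_density(map_a, map_b, t):
    """Per-cell linear interpolation of two 2D density maps (lists of rows),
    t in [0, 1]. Cells that are zero in map_a but non-zero in map_b get
    intermediate density for any 0 < t < 1, i.e. density 'diffuses' into
    regions that were empty in one of the inputs."""
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(map_a, map_b)]
```

For instance, blending a map with an empty cell against one with density 2.0 in that cell at t = 0.5 yields 1.0 there, even though neither input placed animals halfway.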

Conclusion and future work

We presented an easy-to-use authoring tool for herd animation. It enables the user to key-frame a herd over a terrain by extracting the successive visual appearances from photos, while automatically taking the 3D environment constraints as well as the target number of animals into account. The animals in successive key-frames are then automatically paired and animated.

In addition to matching orientation fields, the animation could also be enriched using some known animal features, such as their type of movement (ranging from Brownian motion for flying insects to smooth displacements or jumping for other animals).
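The contrast between these movement types can be sketched as two step functions. This is a hedged illustration of the idea, not part of the authors' system: `brownian_step` re-draws a random heading every step (insect-like jitter), while `smooth_step` bounds how much the heading can turn per step.

```python
import math
import random

def brownian_step(pos, step=0.1):
    """Brownian-like motion: a fresh random heading at every step."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (pos[0] + step * math.cos(theta),
            pos[1] + step * math.sin(theta))

def smooth_step(pos, heading, turn_limit=0.2, step=0.1):
    """Smooth displacement: heading changes are bounded by turn_limit
    per step, producing gradually curving trajectories."""
    heading += random.uniform(-turn_limit, turn_limit)
    new_pos = (pos[0] + step * math.cos(heading),
               pos[1] + step * math.sin(heading))
    return new_pos, heading
```

With `turn_limit` set to zero, `smooth_step` degenerates to straight-line motion, while `brownian_step` keeps scattering regardless; tuning that one parameter spans much of the insect-to-mammal spectrum.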

Among the original features of our method, learning herd animation from photos rather than videos was a design choice. Indeed, images allow more efficient learning and are also much more accessible. In the context of herd animation, finding segments of videos where each animal remains visible for a long enough time period would have been difficult.

Still, the velocity of animal motion cannot be learned from a photo. Being able to extract it from videos, even short ones with only a few fully visible animals, would be an excellent complement to our work, which we plan to investigate in the future.
