Markerless Motion Capture from Single or Multi-Camera Video Sequence
F. Remondino, IGP - ETH Zurich, CH, [email protected]
N. D'Apuzzo, Homometrica Consulting
G. Schrotter, A. Roditakis, IGP - ETH Zurich, CH
Abstract
The modeling of human characters consists of two distinct processes, namely the definition of the 3D shape and of the 3D movement of the body. To achieve realism, both shape and motion information should be acquired from real persons. The two procedures are usually treated separately, using full-body scanners and motion capture systems. In this paper, we report on deterministic methods developed at IGP to perform both modeling processes from a single set of data. Monocular or multi-camera videogrammetry in fact allows human body modeling and motion reconstruction at the same time. The two reliable and accurate photogrammetric procedures are introduced and examples are presented.
Keywords: photogrammetry, matching,capturing techniques, virtual human
1. Introduction
The realistic modeling of human characters from video sequences is a challenging problem that has been investigated extensively over the last decade. Recently, the demand for 3D human models has increased drastically for applications such as movies, video games, ergonomics, e-commerce, virtual environments and medicine. A complete human model consists of both the 3D shape and the movements of the body: most of the available systems treat these two modeling procedures separately, even though they are closely related. A standard approach to capture the static 3D shape (and colour) of an entire human body uses laser scanner technology: it is quite expensive, but it can generate a whole-body model in about 20 seconds. Afterwards, the 3D shape can be animated (articulating the model and changing its posture) or dressed for garment modeling.

On the other hand, precise information on character movements is generally acquired with motion capture techniques: they involve electro-magnetic sensors or a network of cameras and have proven an effective and successful means to replicate human movements. In between, single- or multi-station videogrammetry offers an attractive alternative technique, requiring cheap sensors, allowing markerless tracking and providing 3D shape and movement information at the same time. Model-based approaches are very common, in particular with monocular video streams, while deterministic approaches are almost neglected, often due to the difficulty of recovering the camera parameters and because of occlusions of the human limbs. Generally, computer vision techniques rely on image cues, background segmentation, prior knowledge about human motion and pre-defined articulated body models, but no camera model, to recover motion and 3D information from monocular sequences. A linear combination of 3D basis shapes has been shown to achieve quite good results, even if it was not applied to full-body tracking and reconstruction. On the other hand, multi-camera approaches are employed to increase reliability and accuracy and to avoid problems with self-occlusions. New solutions were recently presented employing multiple synchronized video cameras and structured-light projectors. They ensure an accurate dynamic surface measurement. However, to date, the technology limits the acquisition to small areas of the human body, such as the face.

In this paper we report on human body modeling and motion reconstruction from
Figure 1: Some frames (720x576 pixel) of a moving character extracted from an old video-tape.
uncalibrated monocular videos as well as from multi-camera image sequences. The approaches presented here combine several ideas and algorithms developed in recent years at the Institute of Geodesy and Photogrammetry (ETH Zurich). The goal is to bring them together and to demonstrate reliable and accurate photogrammetric procedures to recover 3D data and generate virtual characters from videos for visualization and animation purposes. The reality-based virtual humans can be used in areas such as film production, entertainment, fashion design and augmented reality, while the recovered 3D positions could serve as a basis for the analysis of human movements or for medical studies. The analysis of existing monocular videos furthermore allows the generation of 3D models of characters who may be long dead or unavailable for common modeling techniques.
2. Monocular Videos
A moving character imaged with one moving camera (Figure 1) represents the most difficult case for the deterministic 3D reconstruction of its poses. To avoid copyright problems, existing sport videos are considered. They are usually acquired with a stationary but freely rotating camera and with a very short baseline between the frames.
Figure 2: Workflow for character modeling and animation from monocular videos (the legend distinguishes fully automated, semi-automated and manual steps).
The full reconstruction and modeling process (Figure 2) consists of (1) calibration and orientation of the images, (2) pose estimation and human skeleton reconstruction and (3) character modeling and animation.

The calibration and orientation of the video is performed with a photogrammetric bundle adjustment. The procedure is required to obtain the camera parameters necessary for the determination of the scene's metric information and for the estimation of the human's poses. Tie points are measured semi-automatically in the images by means of template Least Squares Matching (LSM) and imported into the adjustment as weighted observations. All the unknown parameters are treated as stochastic variables, while significance tests are applied to check the determinability of the camera parameters.

Once the sequence is oriented, some key-frames, where the 3D poses will be recovered, are selected (in the other, intermediate frames the 3D positions will be automatically interpolated). In each key-frame the human body is first reconstructed in skeleton form, as a series of joints connected by segments of known relative lengths. The human joints are measured semi-automatically through the frames with LSM. Then a scaled orthographic projection, together with constraints on joint depths and segment perpendicularity, is applied to obtain accurate and reliable 3D models.
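The depth recovery under scaled orthography can be sketched as follows. The joint coordinates, segment lengths and the minimal-scale choice below are hypothetical illustration values, and only the magnitude of each relative depth is computed; the sign ambiguity (toward or away from the camera) is resolved by the additional constraints mentioned above.

```python
import numpy as np

def segment_depth(p1, p2, length, s):
    """Relative depth |dZ| of a limb segment under scaled orthographic
    projection: l^2 = dX^2 + dY^2 + dZ^2 with (du, dv) = s * (dX, dY).
    p1, p2 are the 2D image positions of the segment's joints, length its
    known 3D length, s the orthographic scale factor."""
    du, dv = p2[0] - p1[0], p2[1] - p1[1]
    d2 = (du * du + dv * dv) / (s * s)
    return np.sqrt(max(length * length - d2, 0.0))

# hypothetical two-segment example (e.g. upper and lower arm)
joints2d = [np.array([100.0, 200.0]), np.array([130.0, 240.0]),
            np.array([125.0, 290.0])]
lengths = [0.30, 0.25]  # known relative segment lengths

# smallest admissible scale: no projected segment may exceed its 3D length
s_min = max(np.linalg.norm(joints2d[i + 1] - joints2d[i]) / lengths[i]
            for i in range(2))

depths = [segment_depth(joints2d[i], joints2d[i + 1], lengths[i], s_min)
          for i in range(2)]
```

With the minimal scale, the segment most foreshortened in the image determines the scale and lies in the fronto-parallel plane (zero relative depth), while the other segments gain non-zero depth.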
Figure 3: Recovered camera poses and reconstructed moving character in skeleton form.
Figure 4: Some frames of the animation created by fitting an H-ANIM model onto the recovered 3D poses.
For each key-frame, the recovered 3D human skeleton is afterwards transformed to the absolute reference system with a 3D-conformal transformation, and the 3D coordinates are refined within a bundle adjustment using the recovered camera parameters (Figure 3). Finally, to improve the visual quality and realism of the model, an H-ANIM virtual character or a laser-scanner polygonal model can be fitted onto the recovered 3D data. The fitting and animation processes are performed with the animation features of the Maya 5.0 software (Figures 4 and 5). The recovered virtual character can be used for augmented reality applications, for person identification or to generate new scenes involving models of characters who are dead or unavailable for common modeling systems.
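A 3D-conformal (seven-parameter similarity) transformation of this kind can be estimated in closed form. The sketch below uses the SVD-based Umeyama solution as a stand-in for the paper's implementation; the point sets in the usage example are hypothetical.

```python
import numpy as np

def conformal_3d(src, dst):
    """Closed-form least-squares estimate of scale s, rotation R and
    translation t such that dst ~= s * R @ src + t (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / len(src)            # variance of src points
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# hypothetical check: recover a known similarity transformation
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
s_true, t_true = 2.0, np.array([1., 2., 3.])
dst = s_true * src @ R_true.T + t_true
s, R, t = conformal_3d(src, dst)
```

At least three non-collinear corresponding points are needed; in practice the recovered skeleton joints provide many more, and the least-squares form absorbs measurement noise.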
Figure 5: Two views showing the results of the fitting process with a polygonal model.
3. Multi-camera Videos
Multi-station videogrammetry is usually employed to avoid limb occlusions and to increase the reliability of the reconstruction. Our method is composed of five steps: (1) acquisition of the video sequences, (2) calibration of the system, (3) surface measurement of the human body in each frame, (4) 3D surface tracking and filtering, and (5) tracking of key-points. Multiple synchronized cameras simultaneously acquire sequences of a scene from different directions (multi-image sequence). In the presented example, three synchronized progressive-scan CCD cameras (640x480) in a triangular arrangement were used. The imaged person does not necessarily need to be naked.

A photogrammetric self-calibration method is used to determine very accurately the exterior and interior camera parameters, as well as some additional parameters modeling the distortion caused by the lenses. Afterwards, a surface measurement based on multi-image LSM, with the additional geometrical constraint that the matched point must lie on the epipolar line, is performed. The automatic matching process determines a dense and robust set of corresponding points in the images, starting from a few seed points automatically selected in the region of interest (spatial matching). The 3D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras (Figure 6).
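The forward ray intersection can be sketched with a standard linear (DLT) triangulation. The 3x4 projection matrices and the three-camera configuration below are hypothetical illustration values, not the calibration of the actual system.

```python
import numpy as np

def triangulate(Ps, xs):
    """Linear forward ray intersection (DLT): recover a 3D point from its
    projections in several oriented cameras. Ps are 3x4 projection matrices
    built from the calibration data; xs are the matched image coordinates."""
    A = []
    for P, (u, v) in zip(Ps, xs):
        # each view contributes two linear equations in the homogeneous point
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

# hypothetical three-camera setup (identity rotations, shifted centres)
Ps = [np.hstack([np.eye(3), t.reshape(3, 1)])
      for t in (np.zeros(3), np.array([-1., 0., 0.]), np.array([0., -1., 0.]))]
X_true = np.array([1.0, 2.0, 10.0])
xs = [(P @ np.append(X_true, 1.0))[:2] / (P @ np.append(X_true, 1.0))[2]
      for P in Ps]
X_est = triangulate(Ps, xs)
```

With three rays per point, as in the triplet configuration used here, the intersection is over-determined, which both improves accuracy and allows blunders to be detected from the residuals.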
Figure 6: Surface measurement. Top: image triplet. Bottom: computed 3D point cloud.
This process is performed for each image triplet of the multi-image sequence, resulting in a dynamic measurement. Figure 7 shows the determined 3D point clouds for some frames. As a next step, a tracking process, also based on the LSM technique, is applied. Its basic idea is to track the corresponding points of each triplet through the sequence and to compute their 3D trajectories (temporal matching). The spatial correspondences between the images of a triplet acquired at the same time are therefore matched with the subsequent frames (see Figure 8).
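A much-reduced sketch of least squares matching, estimating only a translational shift of the template, is shown below; the full adaptive LSM also estimates affine shape and radiometric parameters. The synthetic image and window position are hypothetical illustration values.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear resampling of img at (possibly fractional) coordinates y, x."""
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

def lsm_translation(template, image, y, x, iters=30):
    """Gauss-Newton least squares matching, translation-only: iteratively
    refine the patch position (y, x) so that the resampled image patch best
    fits the template in the least-squares sense."""
    h, w = template.shape
    gy, gx = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(iters):
        patch = bilinear(image, gy + y, gx + x)
        # numerical gradients of the resampled patch (central differences)
        Iy = bilinear(image, gy + y + 0.5, gx + x) - bilinear(image, gy + y - 0.5, gx + x)
        Ix = bilinear(image, gy + y, gx + x + 0.5) - bilinear(image, gy + y, gx + x - 0.5)
        A = np.stack([Iy.ravel(), Ix.ravel()], axis=1)
        r = (template - patch).ravel()
        dy, dx = np.linalg.lstsq(A, r, rcond=None)[0]
        y, x = y + dy, x + dx
        if abs(dy) + abs(dx) < 1e-4:   # converged to sub-pixel precision
            break
    return y, x

# synthetic smooth image; the template is cut out at a sub-pixel position
yy, xx = np.mgrid[0:64, 0:64].astype(float)
image = np.exp(-((yy - 32) ** 2 + (xx - 30) ** 2) / 100.0)
gy, gx = np.mgrid[0:11, 0:11].astype(float)
template = bilinear(image, gy + 25.4, gx + 24.7)
y_est, x_est = lsm_translation(template, image, 26.0, 25.0)
```

In the tracking described above, the template from one frame is matched against the next frame of the same camera (temporal matching) and against the other images of the triplet (spatial matching).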
Figure 7: Dynamic surface measurement: 3D point clouds for some frames of the sequence.
Figure 8: LSM tracking: temporal and spatial correspondences are established with LSM.
The result of the automated tracking process is the coordinates of a point in the three images through the sequence; its 3D trajectory is thus determined by forward ray intersection. Velocities and accelerations are also computed.

The advantage of this tracking process is twofold: it can track natural points, without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration).

To extract general motion information from the dense set of trajectories, key-points are introduced. A key-point is a 3D region manually defined in the vector field of trajectories, whose size can vary and whose position is defined by its centre of gravity. The key-points are tracked in a simple way: the position at the next time step is established by the mean value of the displacements of all the trajectories inside the region. Key-points can be placed in such a way that their trajectories describe complex movements. An example is shown in Figure 9.
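The key-point update rule described above reduces to averaging the displacements of the trajectories inside the region; a minimal sketch, with a hypothetical synthetic vector field, could look like this:

```python
import numpy as np

def track_keypoint(center, radius, pts_t, pts_t1):
    """Move a key-point one time step ahead: its displacement is the mean
    displacement of all tracked surface points currently inside its region.
    pts_t and pts_t1 hold corresponding 3D points at consecutive frames."""
    inside = np.linalg.norm(pts_t - center, axis=1) <= radius
    if not inside.any():
        return center               # no trajectories in the region
    return center + (pts_t1[inside] - pts_t[inside]).mean(axis=0)

# hypothetical vector field: points near the origin move +0.1 in x,
# a far-away cluster moves differently and must not influence the key-point
near = np.random.default_rng(0).uniform(-0.5, 0.5, (50, 3))
far = near + np.array([10.0, 0.0, 0.0])
pts_t = np.vstack([near, far])
pts_t1 = np.vstack([near + np.array([0.1, 0.0, 0.0]),
                    far + np.array([0.0, 0.2, 0.0])])
new_center = track_keypoint(np.zeros(3), 1.0, pts_t, pts_t1)
```

Averaging over all trajectories in the region also filters out the noise of individual surface points, which is why a key-point trajectory is smoother than any single tracked point.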
Figure 9: Example of key-points tracked on the human body.
In this case, the key-points represent an approximation of the joint trajectories. The final result of the tracking process is shown in Figure 11 together with the corresponding frames; two views of the 3D trajectories of the key-points are displayed in Figure 10.
Figure 10: View from the top (left) and from the front (right) of the 3D trajectories of the key-points.
The key-points can afterwards be used for further visualization purposes. Using an H-ANIM human model and the freeware raytracer Povray (Persistence of Vision Ray-Tracer™), the different limbs of an H-ANIM character can be automatically posed in the correct positions described by the recovered key-points (see Figure 12).
Figure 11: Tracking key-points: some frames of the sequence and view of the tracked key-points.
Figure 12: Some frames showing the fitting of an H-ANIM character onto the recovered 3D data using the extracted key-points.
4. Conclusions
In this paper, two methods to perform both human body modeling and motion reconstruction, from uncalibrated monocular videos as well as from multi-camera image sequences, were discussed. The methods are based on reliable and accurate photogrammetric procedures that recover the 3D data from the images through a camera model. Videogrammetry is therefore a powerful and reliable solution for these kinds of applications. The obtained photogrammetric data can be used for gait analysis, for biomechanics studies or as motion input for avatars in virtual environments.
References
Cyberware: http://www.cyberware.com [Accessed November 2004]
Vitus: www.vitus.de/english/home_en.html [Accessed November 2004]
B. Allen, B. Curless and Z. Popovic. Articulated body deformation from range scan data. Proc. SIGGRAPH, pp. 612-619, 2002
H. Seo and N. Magnenat-Thalmann. An Automatic Modeling of Human Bodies from Sizing Parameters. ACM SIGGRAPH Symposium on Interactive 3D Graphics, pp. 19-26, 2003
X. Ju, N. Werghi and J.P. Siebert. Automatic Segmentation of 3D Human Body Scans. International Conference on Computer Graphics and Imaging, pp. 239-244, 2000
F. Cordier, N. Magnenat-Thalmann. Real-time animation of dressed virtual humans. Computer Graphics Forum, Blackwell, Vol. 21(3), 2002
F. Cordier, H. Seo and N. Magnenat-Thalmann. Made-to-Measure technologies for an online clothing store. IEEE CG&A special issue on 'Web Graphics', pp. 38-48, 2003
Ascension: http://www.ascension-tech.com/ [Accessed November 2004]
Vicon: http://www.vicon.com [Accessed November 2004]
Motion Analysis: http://www.motionanalysis.com/ [Accessed November 2004]
C. Sminchisescu. Three Dimensional Human Modeling and Motion Reconstruction in Monocular Video Sequences. Ph.D. Dissertation, INRIA Grenoble, 2002
H. Sidenbladh, M. Black and D. Fleet. Stochastic Tracking of 3D Human Figures Using 2D Image Motion. ECCV 2000, Springer Verlag, LNCS 1843, pp. 702-718, 2000
M. Leventon, W. Freeman. Bayesian estimation of 3D human motion from an image sequence. TR-98-06, MERL, 1998
M. Yamamoto, A. Sato, S. Kawada, T. Kondo and Y. Osaki. Incremental Tracking of Human Actions from Multiple Views. IEEE CVPR Proc., 1998
C. Bregler, A. Hertzmann, H. Biermann. Recovering Non-Rigid 3D Shape from Image Streams. Proc. IEEE CVPR, 2000
L. Torresani, D. Yang, E. Alexander, C. Bregler. Tracking and Modeling Non-Rigid Objects with Rank Constraints. Proc. IEEE CVPR, 2001
N. D'Apuzzo, R. Plänkers, P. Fua, A. Gruen and D. Thalmann. Modeling human bodies from video sequences. In El-Hakim/Gruen (Eds.), Videometrics VI, Proc. of SPIE, Vol. 3461, pp. 36-47, 1999
D.M. Gavrila and L. Davis. 3D model-based tracking of humans in action: a multi-view approach. IEEE CVPR Proc., pp. 73-80, 1996
S. Vedula and S. Baker. Three Dimensional Scene Flow. ICCV '99, Vol. 2, pp. 722-729, 1999
F. Remondino and N. Boerlin. Photogrammetric calibration of sequences acquired with a rotating camera. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV, part 5/W16, 2004
A. Gruen. Adaptive least squares correlation: a powerful image matching technique. South African Journal of Photogrammetry, Remote Sensing and Cartography, Vol. 14(3), pp. 175-187, 1985
F. Remondino and A. Roditakis. Human Figures Reconstruction and Modeling from Single Images or Monocular Video Sequences. IEEE International 3DIM Conference, pp. 116-123, 2003
H-Anim: http://www.h-anim.org [Accessed November 2004]
Maya: http://www.aliaswavefront.com/ [Accessed November 2004]
H.G. Maas. Image sequence based automatic multi-camera system calibration techniques. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXII, part 5, pp. 763-768, 1998
N. D'Apuzzo. Measurement and modeling of human faces from multi images. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, part 5, pp. 241-246, 2002
L. Zhang, N. Snavely, B. Curless and S.M. Seitz. Spacetime faces: high-resolution capture for modeling and animation. ACM SIGGRAPH Proceedings, 2004