BACS Review Meeting, FCT-UC, Jorge Dias, 17th – 19th March 2008, Collège de France, Paris


TRANSCRIPT

Page 1: BACS Review Meeting

BACS Review Meeting

FCT-UC

Jorge Dias

17th – 19th March 2008, Collège de France, Paris

Page 2: BACS Review Meeting

FCT-UC Key Role within BACS for M13 – M24

• Contributions in WP2: Bayesian models for sensor fusion (in collaboration with Collège de France).

• WP5.4:

Task 5.4.2: Computational Laban Movement Analysis (LMA) using the Bayesian framework (vision-based detection and reconstruction of human actions, with MPS).

Task 5.4.3: Multi-modal sensor integration including ego-motion.

• WP6, D6.2: Robotic Implementation of Gaze Control and Image Stabilization.

[Figure: BACS project structure diagram relating WP1–WP8 across the themes "Bayesian approach to complex systems", "Study of Living Systems", "Study of Artificial Cognitive Systems", "Real World Applications", "Neural implementation of Bayesian computation", "Bayesian models of systems-level behaviors", "Inference and Learning in complex Bayesian systems" and "Dissemination and Community Integration", with information flowing to and from all work packages.]

Page 3: BACS Review Meeting

FCT-UC Overview: Effort & Infrastructure

• List of people involved in BACS (changes from M1-M12 to M13-M24)

Paid by BACS:
• Joerg Rett, PhD Student: 1.32
• Filipe Ferreira, PhD Student: 6.6
• José Prado, PhD Student: 7.79
• Amilcar Pedrosa, Technical Support: 3.04
• Hugo Faria, Technical Support: 2.66
• Luis Santos, PhD Student: 1.1
• Alberto Neves, PhD Student: 4.95
• Cátia Pinho, PhD Student: 4.4
• Hadi Aliakbarpour, PhD Student: 2.64
• Total: 34.5

Own resources:
• Jorge Dias, Professor: 2.4
• Jorge Lobo, Professor: 2.4
• J. Filipe Ferreira, PhD Student: 2.4
• Luis Mirisola, PhD Student: 2.4
• Diego Faria, PhD Student: 2.4
• Total: 12

• Infrastructure involved in BACS (M13 – M24): IMPEP; Nicole platform for gesture recognition

Page 4: BACS Review Meeting
Page 5: BACS Review Meeting

WP 5, Task 5.4.2 Goal

Computational Laban Movement Analysis (LMA) using the Bayesian framework

Problem:

The research field of computational Human Movement Analysis lacks a general underlying modeling language. How can the features be mapped into symbols? How can human behavior be modeled?

Solution:

A semantic descriptor that recognizes a sequence of symbols taken from an alphabet of motion entities.

Benefit:

Establishes a set of labels for observable human behavior and makes it possible to build large databases with labeled training data.

Page 6: BACS Review Meeting

WP 5, Task 5.4.2 Goal

Computational Laban Movement Analysis (LMA) using the Bayesian framework

Laban Movement Analysis: a model for human behaviour.

Bayesian Model: a probabilistic model to analyse human interaction.

Page 7: BACS Review Meeting


WP 5, Task 5.4.3 Goal

Multi-modal sensor integration including ego-motion

[Figure: model synthesis loop relating artificial and biological perception of ego-motion. Sensor readings from an artificial observer drive an artificial-perception Bayesian model, while psychophysical studies of a human/biological observer inform a biological-perception Bayesian model; the two model outputs are compared on ego-motion illusions, conflicts and ambiguities, feeding model analysis, re-evaluation and synthesis, for a 3D scene with static and moving objects.]

Biomimetic Artificial Multimodal Perception Systems

A moving observer observes a non-static 3D scene, possibly containing several moving objects. How does the observer perceive:

his own motion (ego-motion);

the 3D structure of all objects in the scene;

the 3D trajectory and velocity of moving objects (independent motion)?

Page 8: BACS Review Meeting

WP 5, Task 5.4.3 Goal

Multi-modal sensor integration including ego-motion

• We mainly expect to contribute by developing novel perceptual computational models which:

1. are based on the fusion of the perceptual modalities of vision, audition, haptics and inertial sensing;

2. mimic as closely as possible biological multimodal perceptual fusion processes;

3. perform perceptual fusion within a Bayesian framework.

• and, in the process, to: implement unimodal perceptual modules for Bayesian cue integration within each modality (see the sketch after this list); implement and assemble modules with several existing state-of-the-art computational models of visual, auditory, haptic and vestibular perception.
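
For illustration only (not the project's actual code), a minimal sketch of Bayesian cue integration within one modality, assuming independent Gaussian cues so that the fused estimate is simply the precision-weighted average; the function name and the numeric values are hypothetical.

    # Minimal sketch of Bayesian cue integration for one modality, assuming
    # independent Gaussian cues: the posterior is Gaussian, with precision
    # equal to the sum of the cue precisions (hypothetical values only).

    def fuse_gaussian_cues(cues):
        """cues: list of (mean, variance) pairs describing one physical quantity."""
        precision = sum(1.0 / var for _, var in cues)
        mean = sum(mu / var for mu, var in cues) / precision
        return mean, 1.0 / precision

    # e.g. a depth estimate from stereo disparity and from motion parallax
    fused_mean, fused_var = fuse_gaussian_cues([(2.10, 0.04), (1.95, 0.09)])
    print(fused_mean, fused_var)

The same precision-weighted form applies to any pair of cues within a modality; cross-modal fusion in the project is handled by the BVM framework described later.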

Page 9: BACS Review Meeting

WP 6, (D6.2) Goal

• Gaze control and image stabilization rely on fusing the inertial and visual sensing modalities.

• Humans and other biological systems also combine the two sensing modalities for the same goal: contribution of psychophysical studies.

• Bayesian models have been successfully used to explain psychophysical experimental findings.

• A robotic implementation using Bayesian inference.

Page 10: BACS Review Meeting

WP 5, Task 5.4.2 Achievements

Computational Laban Movement Analysis (LMA) using the Bayesian framework

Processes for online classification: Tracking → Low-Level Feature Computation (3-D points → low-level features, LLF) → Bayesian Inference (LMA labels) → Bayesian Inference (classified behavior).
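
As a reading aid only (not the deliverable's actual implementation), a minimal sketch of the two inference stages in the pipeline above, assuming discretised low-level features, naive-Bayes likelihoods for the LMA Effort labels, and made-up probability tables; all names and numbers are hypothetical.

    # Hypothetical two-stage Bayesian classifier mirroring the pipeline above:
    # low-level features -> LMA (Effort) labels -> classified behaviour.

    # Stage 1: P(label | curvature bin, speed-gain bin), naive Bayes.
    p_label = {"sudden": 0.5, "sustained": 0.5}                  # prior over Effort.Time
    p_feat_given_label = {                                       # P(feature bin | label)
        "sudden":    {"curv_high": 0.7, "curv_low": 0.3, "gain_high": 0.8, "gain_low": 0.2},
        "sustained": {"curv_high": 0.2, "curv_low": 0.8, "gain_high": 0.3, "gain_low": 0.7},
    }

    def infer_label(curv_bin, gain_bin):
        post = {lab: prior * p_feat_given_label[lab][curv_bin] * p_feat_given_label[lab][gain_bin]
                for lab, prior in p_label.items()}
        z = sum(post.values())
        return {lab: p / z for lab, p in post.items()}

    # Stage 2: P(behaviour | label sequence), labels assumed independent given the behaviour.
    p_label_given_behaviour = {"pointing": {"sudden": 0.7, "sustained": 0.3},
                               "waving":   {"sudden": 0.4, "sustained": 0.6}}

    def infer_behaviour(label_seq, prior={"pointing": 0.5, "waving": 0.5}):
        post = dict(prior)
        for lab in label_seq:
            for b in post:
                post[b] *= p_label_given_behaviour[b][lab]
        z = sum(post.values())
        return {b: p / z for b, p in post.items()}

    print(infer_label("curv_high", "gain_high"))
    print(infer_behaviour(["sudden", "sudden", "sustained"]))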

Page 11: BACS Review Meeting

WP 5, Task 5.4.2 Results – Examples and Demos

[Plots: tracked positions; low-level features (curvature K, speed gain Acc); Laban descriptors Effort.Time (E.Ti) and Effort.Space (E.Sp); behavior hypothesis over the movement/space sequence 1–10 with symbols U, R, L and 0.]

Page 12: BACS Review Meeting

WP 5, Task 5.4.2 Future Plans

• D5.17 (FCT-UC/Probayes): Publication on 'Computational Laban Movement Analysis based on Vision and 3-D Position Estimation', T31

• D5.18 (FCT-UC/Probayes): Publication on 'Bayesian Model for Computational Laban Movement Analysis', T33

• D5.20 (FCT-UC/Probayes): Publication on 'Computational Laban Movement Analysis using Multi-Camera Systems', T36

Page 13: BACS Review Meeting

WP 5, Task 5.4.3 Achievements

• Experimental setup: the Integrated Multimodal Perception Experimental Platform (IMPEP); the current version is operational and a new one is under construction.

• Bayesian Volumetric Map (BVM), egocentric and log-spherical, for multimodal perception of 3D structure and motion.

An egocentric, log-spherical spatial memory map has been devised as a framework for multimodal sensor fusion, named the Bayesian Volumetric Map (BVM). This map stores the independent probabilistic states of occupancy O_C and velocity V_C for each cell C of a volumetric grid with a log-spherical configuration.
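
A minimal sketch of how a log-spherical grid of this kind could index a 3D point into a cell holding occupancy and velocity states; the resolutions, the class layout and the velocity discretisation are assumptions for illustration, not the project's actual data structure.

    # Hypothetical log-spherical BVM grid: range is discretised logarithmically,
    # azimuth/elevation linearly, and each cell stores P(occupied) plus a small
    # discrete velocity histogram (placeholder for the V_C state).
    import math

    class BVM:
        def __init__(self, r_min=0.5, r_max=10.0, n_r=20, n_az=36, n_el=18):
            self.r_min, self.r_max = r_min, r_max
            self.n_r, self.n_az, self.n_el = n_r, n_az, n_el
            self.cells = {}

        def cell_index(self, x, y, z):
            r = math.sqrt(x * x + y * y + z * z)
            az = math.atan2(y, x)                        # [-pi, pi]
            el = math.asin(z / r) if r > 0 else 0.0      # [-pi/2, pi/2]
            i_r = int(self.n_r * math.log(max(r, self.r_min) / self.r_min)
                      / math.log(self.r_max / self.r_min))
            i_az = int((az + math.pi) / (2 * math.pi) * self.n_az) % self.n_az
            i_el = int((el + math.pi / 2) / math.pi * self.n_el)
            return (min(i_r, self.n_r - 1), i_az, min(i_el, self.n_el - 1))

        def cell(self, idx):
            # default: uninformative occupancy, uniform velocity histogram
            return self.cells.setdefault(idx, {"p_occ": 0.5, "p_vel": [0.2] * 5})

    bvm = BVM()
    print(bvm.cell_index(1.0, 2.0, 0.5))

The logarithmic range axis gives finer cells near the observer and coarser cells far away, which is the stated motivation for the egocentric log-spherical configuration.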

Page 14: BACS Review Meeting

WP 5, Task 5.4.3 Results – Examples and Demos

• The Integrated Multimodal Perception Experimental Platform (IMPEP).

• Artificial multimodal active perception system with gaze control capabilities for image stabilization and perceptual attention, comprising: a stereovision setup; a binaural setup; a motorised head platform with inertial sensors emulating the vestibular system.

• Current version operational; new version under construction (PoP FP6-IST-2004-027268).

Page 15: BACS Review Meeting

WP 5, Task 5.4.3 Results – Examples and Demos

[Figure: IMPEP/BVM Framework Overview. Hardware (sensors and interfaces, active head motors) and perceptual system: sensory signals (time t) feed a Vision Module (stereovision unit, motion perception unit, Bayesian vision sensor model), an Auditory Module (monaural cochlear (AIM) processor, binaural processor, Bayesian audition sensor model) and a Bayesian Inertial Module; their feature maps enter the Multimodal Perception Module / Bayesian Volumetric Maps (BVM), which cycles through prediction P(O_C V_C | C), estimation P(O_C V_C | Z C) and observation P(Z | O_C V_C C); gaze control (attention and tracking; Bayesian gaze control for image stabilisation) issues motor commands (time t+1) to the active head motors.]
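
To make the prediction/estimation/observation cycle in the figure concrete, here is a minimal per-cell sketch of a Bayes filter over a joint occupancy-velocity state, assuming a small discrete velocity set and made-up transition and sensor models; it illustrates the form of the update, not the actual BVM equations.

    # Minimal per-cell Bayes filter sketch for the BVM loop in the figure:
    #   prediction  P(O_C V_C | C)    via a transition model,
    #   estimation  P(O_C V_C | Z C)  proportional to P(Z | O_C V_C C) * prediction.
    STATES = [(occ, v) for occ in (0, 1) for v in (-1, 0, +1)]   # occupancy x velocity

    def predict(belief, p_stay=0.9):
        """Placeholder transition model: the state persists with probability p_stay,
        otherwise the probability mass spreads uniformly over the other states."""
        return {s: sum(b * (p_stay if s == s0 else (1 - p_stay) / (len(STATES) - 1))
                       for s0, b in belief.items())
                for s in STATES}

    def estimate(pred, z, sensor_model):
        """Bayes update: posterior proportional to P(z | state) * prediction."""
        post = {s: sensor_model(z, s) * p for s, p in pred.items()}
        norm = sum(post.values())
        return {s: p / norm for s, p in post.items()}

    def sensor_model(z, state):
        occ, _ = state
        # hypothetical range-sensor model: a 'hit' is more likely if the cell is occupied
        return 0.8 if (z == "hit") == bool(occ) else 0.2

    belief = {s: 1.0 / len(STATES) for s in STATES}
    for z in ["hit", "hit", "miss"]:
        belief = estimate(predict(belief), z, sensor_model)
    print(max(belief, key=belief.get), round(max(belief.values()), 3))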

Page 16: BACS Review Meeting

WP 5, Task 5.4.3 Results – Examples and Demos

Using the BVM Framework for Entropy-Based Active Exploration

[Figure: Bayesian multimodal perception over the BVM (prediction P(O_C V_C | C), estimation P(O_C V_C | Z C), observation P(Z | O_C V_C C), observations Z_1 … Z_S) yields a per-cell entropy H(c); the gaze computation uses the entropy gradient ∇H(c) to choose the next fixation, and gaze control sends the corresponding motor commands to redirect the sensors over the perceptual scene.]
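
An illustrative sketch (with a hypothetical cell layout and probabilities) of the entropy-driven step in the figure: compute the occupancy entropy H(c) of each BVM cell and direct the gaze towards the direction whose cells are currently most uncertain.

    # Hypothetical sketch of entropy-based active exploration over BVM cells:
    # compute the binary occupancy entropy H(c) per cell and point the gaze at
    # the azimuth/elevation direction whose cells are, on average, most uncertain.
    import math

    def entropy(p):
        """Entropy of a binary occupancy probability, in bits; 0 for p in {0, 1}."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    # made-up map: (azimuth_index, elevation_index, range_index) -> P(occupied)
    occupancy = {(0, 0, 0): 0.05, (0, 0, 1): 0.95,   # already well explained
                 (1, 0, 0): 0.50, (1, 0, 1): 0.60,   # still uncertain
                 (2, 0, 0): 0.10, (2, 0, 1): 0.15}

    def next_gaze(occupancy):
        """Pick the (azimuth, elevation) direction with the highest mean entropy."""
        per_dir = {}
        for (az, el, _r), p in occupancy.items():
            per_dir.setdefault((az, el), []).append(entropy(p))
        return max(per_dir, key=lambda d: sum(per_dir[d]) / len(per_dir[d]))

    print(next_gaze(occupancy))   # expected: direction (1, 0), the least certain one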

Page 17: BACS Review Meeting

WP 5, Task 5.4.3 Future Plans

• D5.15 (FCT-UC): Integrated multimodal perception experimental platform demo V1.0, T27

• D5.16 (FCT-UC/INRIA/CNRS-Gren./Probayes): Publications on Bayesian models of multimodal perception of 3D structure and motion, T30

• D5.21 (FCT-UC): Integrated Multimodal Perception Experimental Platform demo v2.0, T36

• D5.25 (FCT-UC/CNRS-LPPA): Publication on 'Bayesian visuo-inertial gaze control', T42

Page 18: BACS Review Meeting

WP 6, (D6.2) Achievements

• Robotic implementation of gaze control and image stabilization.

• Simple probabilistic optical flow algorithm: a population-code-type data structure storing two-dimensional pdfs over the image velocity space (Δu, Δv) as an output; primarily based on Zelek's adaptation of the block matching (correlation) algorithm (see the sketch below).

• Bayesian program for processing of inertial data: a Bayesian model of the human vestibular system [Laurens and Droulez (2006)], adapted to the use of inertial sensors, estimates the current angular position and angular velocity, mimicking human vestibular perception.

Probabilistic Block Matching Optical Flow
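
A minimal sketch of the probabilistic block-matching idea, under assumptions not stated in the slides (SSD matching cost, an exponential cost-to-probability mapping, and arbitrary window sizes): matching costs over a window of candidate displacements are turned into a 2-D pdf over (Δu, Δv) rather than a single winning displacement.

    # Illustrative sketch of probabilistic block-matching optical flow (not the
    # deliverable's implementation): for one block, compute a sum-of-squared-
    # differences cost for every candidate displacement and convert the costs
    # into a 2-D pdf over (du, dv), the population-code-style output.
    import numpy as np

    def block_flow_pdf(prev, curr, y, x, block=5, search=4, beta=0.05):
        half = block // 2
        ref = prev[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        costs = np.zeros((2 * search + 1, 2 * search + 1))
        for i, dv in enumerate(range(-search, search + 1)):
            for j, du in enumerate(range(-search, search + 1)):
                cand = curr[y + dv - half:y + dv + half + 1,
                            x + du - half:x + du + half + 1].astype(float)
                costs[i, j] = np.sum((ref - cand) ** 2)
        pdf = np.exp(-beta * costs)          # low cost -> high probability
        return pdf / pdf.sum()               # normalise to a pdf over (dv, du)

    # toy example: a bright square moves 2 pixels to the right between frames
    prev = np.zeros((40, 40)); prev[18:23, 18:23] = 1.0
    curr = np.zeros((40, 40)); curr[18:23, 20:25] = 1.0
    pdf = block_flow_pdf(prev, curr, y=20, x=20)
    iv, iu = np.unravel_index(np.argmax(pdf), pdf.shape)
    print(iv - 4, iu - 4)   # most probable (dv, du); expected around (0, 2)

Keeping the full pdf rather than only the best match is what allows the flow output to feed directly into the Bayesian gaze-control and stabilization models.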

Page 19: BACS Review Meeting

WP 6, (D6.2) Results – Examples and Demos

We can see a strong correlation between the pan and yaw signals, and between the tilt and roll signals, since the pan-and-tilt unit is compensating for the observed motion at a low sample rate.

The small remaining optical flow shows that the controller is working to some extent, although it is not fully reliable, since abrupt motions are not observable.

Observed yaw and roll, pan and tilt motor control, and remaining observed optical flow.

Page 20: BACS Review Meeting

WP 6, (D6.2) Future Plans

Our work will focus on multimodal sensor integration within WP5, and future work will address:

• Implementing the image stabilization algorithm on the new robotic system.

• Adding magnetic data to our Bayesian implementation, providing a more robust attitude estimation.

• Focusing on gaze control and attention models.

• Contacting partners within BACS (CNRS-LPPA): following previous contacts, confront partners with our models and implementation and discuss possible parallel trials of robotic and psychophysical experiments.

Going beyond the initial implementation reported in D6.2, we propose joint work and a subsequent publication, D5.22 (T24-T42), with FCT-UC/CNRS-LPPA:

D5.25: Publication on 'Bayesian visuo-inertial gaze control', T42

Page 21: BACS Review Meeting

WP 5 Summary 1

• Major achievements during M13-M24:

Computational Laban Movement Analysis based on vision and 3-D position estimation.

Experimental setup: the Integrated Multimodal Perception Experimental Platform (IMPEP), current version operational and new one under construction.

Bayesian Volumetric Map (BVM), egocentric and log-spherical, for multimodal perception of 3D structure and motion.

• List of deliverables:

D5.7 (MPS): Report on computational human pose recovery in clutter, M18

D5.8 (FCT-UC): State of the art on (Artificial) 3D Structure and Motion Multimodal Perception, M15

D6.2 (FCT-UC): Robotic Implementation of Gaze Control and Image Stabilization, M18

Page 22: BACS Review Meeting

WP 5 Summary 1

• Conference

Rett, J. and Dias, J.: Human-robot interface with anticipatory characteristics based on Laban Movement Analysis and Bayesian models. In: Proceedings of the IEEE 10th International Conference on Rehabilitation Robotics (ICORR), 2007.

Rett, J. and Dias, J.: Human Robot Interaction based on Bayesian Analysis of Human Movements. In: Proceedings of EPIA 07, Lecture Notes in AI, Springer Verlag, Berlin, 2007.

Mirisola, L. G. B., Lobo, J. and Dias, J.: 3D map registration using vision/laser and inertial sensing. In: European Conference on Mobile Robots (ECMR 2007), Freiburg, Germany, September 2007.

Dias, J., Simplicio, C. and Faria, D. R.: 3D photo-realistic talking head for human-robot interaction. In: Proceedings of the 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping, Leiria, Portugal, 24-29 September 2007.

Ferreira, F., Santos, V. and Dias, J.: Robust Place Recognition Within Multi-sensor View Sequences Using Bernoulli Mixture Models. In: 6th IFAC Symposium on Intelligent Autonomous Vehicles (IAV 2007), Toulouse, France, 2007.

Rett, J. and Dias, J.: Computational Laban Movement Analysis using probability calculus. In: Proceedings of the Workshop on Robotics and Mathematics (RoboMat 2007).

Rett, J. and Dias, J.: Bayesian models for Laban Movement Analysis used in Human Machine Interaction. In: Proceedings of the ICRA 2007 Workshop on "Concept Learning for Embodied Agents".

Ferreira, F., Davim, L., Rocha, R., Santos, V. and Dias, J.: Using Local Features To Classify Objects Having Printable Codes. In: Proceedings of the 5th International Workshop on European Scientific and Industrial Collaboration on promoting Advanced Technologies in Manufacturing (WESIC).

• Journal

Lobo, J. and Dias, J.: "Relative Pose Calibration Between Visual and Inertial Sensors", International Journal of Robotics Research, Special Issue on the 2nd Workshop on Integration of Vision and Inertial Sensors, vol. 26, no. 6, June 2007, pp. 561-575.

Corke, P., Lobo, J. and Dias, J.: "An introduction to inertial and visual sensing", International Journal of Robotics Research, Special Issue on the 2nd Workshop on Integration of Vision and Inertial Sensors, vol. 26, no. 6, June 2007, pp. 519-535.

• Thesis

Lobo, J.: "Integration of Vision and Inertial Sensing", PhD Thesis, Supervisor: Jorge Dias, defended July 2007.

Page 23: BACS Review Meeting

WP 5 Summary 2

• Major collaborations within BACS

RDG2: a collaboration between FCT-UC, IDIAP, ETH Zürich and MPS Tübingen was initiated, with the purpose of developing tools for audiovisual VR world production, so as to create stimuli both for human perception studies (WP6.4) and for simulations related to this Task, in order to test and demo the artificial multimodal perception system. This ongoing effort has already produced a co-authored state-of-the-art report: "State of the Art on 3D Audiovisual APIs/SDKs for Stimulus Generation and Presentation".

RDG3: a collaboration between FCT-UC, INRIA Rhône-Alpes, Probayes and CNRS-Grenoble on the subject "Bayesian Models for Multimodal Perception of 3D Structure and Motion", involving a student exchange from M20 to M22, was undertaken and successfully completed; several deliverables are being finalised and are expected to be ready by M24, namely:

• a final technical report, describing the work done during the exchange period and delineating future collaboration between these partners;

• two joint publications, to be submitted to ICVS 2008 and CogSys 2008 …

• Major collaborations outside BACS

FCT-UC collaborates with the Perception on Purpose project (PoP FP6-IST-2004-027268, of which FCT-UC is also a partner), sharing physical resources and know-how, namely in the construction of the new robotic vision head.

• Summary of future plans

Computational Laban Movement Analysis using Multi-Camera Systems.

Bayesian models of multimodal perception of 3D structure and motion.

Going beyond the initial implementation reported in D6.2, propose joint work and a subsequent publication, D5.22 (T24-T42), with FCT-UC/CNRS-LPPA.