Page 1: Hybrid Position-Based Visual Servoing

Hybrid Position-Based Visual Servoing

Intelligent Robotics Research Centre (IRRC)

Department of Electrical and Computer Systems Engineering

Monash University, Australia

Visual Perception and Robotic Manipulation

Springer Tracts in Advanced Robotics

Chapter 6

Geoffrey Taylor

Lindsay Kleeman

Page 2: Hybrid Position-Based Visual Servoing

Overview

• Motivation for hybrid visual servoing

• Visual measurements and online calibration

• Kinematic measurements

• Implementation of controller and IEKF

• Experimental comparison of hybrid visual servoing with existing techniques

Page 3: Hybrid Position-Based Visual Servoing

Motivation

• Manipulation tasks for a humanoid robot are characterized by:
  – Autonomous planning from internal models
  – Arbitrarily large initial pose error
  – Background clutter and occluding obstacles
  – Cheap sensors → camera model errors
  – Light, compliant limbs → kinematic calibration errors

Metalman: upper-torso humanoid hand-eye system

Page 4: Hybrid Position-Based Visual Servoing

Visual Servoing

• Image-based visual servoing (IBVS):
  – Robust to calibration errors if target image known
  – Depth of target must be estimated
  – Large pose error can cause unpredictable trajectory

• Position-based visual servoing (PBVS):
  – Allows 3D trajectory planning
  – Sensitive to calibration errors
  – End-effector may leave field of view

• Linear approximations (affine cameras, etc.)

• Deng et al. (2002) suggest little difference between visual servoing schemes.

Page 5: Hybrid Position-Based Visual Servoing

Conventional PBVS

• Endpoint open-loop (EOL):
  – Controller observes only the target
  – End-effector pose estimated using kinematic model and calibrated hand-eye transformation
  – Not affected by occlusion of the end-effector

• Endpoint closed-loop (ECL):
  – Controller observes both target and end-effector
  – Less sensitive to kinematic calibration errors, but fails when the end-effector is obscured
  – Accuracy depends on camera model and 3D pose reconstruction method

Page 6: Hybrid Position-Based Visual Servoing

Proposed Scheme

• Hybrid position-based visual servoing using fusion of visual and kinematic measurements:
  – Visual measurements provide accurate positioning
  – Kinematic measurements provide robustness to occlusions and clutter
  – End-effector pose is estimated from the fused measurements using an Iterated Extended Kalman Filter (IEKF)
  – Additional state variables included for on-line calibration of camera and kinematic models

• Hybrid PBVS has the benefits of both EOL and ECL control and the deficiencies of neither.

Page 7: Hybrid Position-Based Visual Servoing

Coordinate Frames

[Figure: coordinate frames used in EOL, ECL, and hybrid control]

Page 8: Hybrid Position-Based Visual Servoing

PBVS Controller

• Conventional approach (Hutchinson et al., 1999).

• Control error (pose error):
  ${}^{O}H_{G} = ({}^{W}H_{O})^{-1}\,{}^{W}H_{E}\,{}^{E}H_{G}$

• ${}^{W}H_{E}$ is estimated by visual/kinematic fusion in the IEKF.

• Proportional velocity control signal (a code sketch follows):
  $\mathbf{V} = -k_{1}\,{}^{O}\mathbf{T}_{G}, \qquad \boldsymbol{\Omega} = -k_{2}\,{}^{O}\boldsymbol{\theta}_{G}$
  where ${}^{O}\mathbf{T}_{G}$ and ${}^{O}\boldsymbol{\theta}_{G}$ are the translational and axis-angle rotational components of the pose error.
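A minimal sketch of this controller in Python/NumPy. The gains k1 and k2 and the function names are illustrative assumptions, not the authors' implementation:

```python
# Sketch: pose error and proportional velocity command from 4x4
# homogeneous transforms (illustrative, not the authors' code).
import numpy as np

def pose_error(W_H_O, W_H_E, E_H_G):
    """O_H_G = inv(W_H_O) @ W_H_E @ E_H_G: gripper pose in the object frame."""
    return np.linalg.inv(W_H_O) @ W_H_E @ E_H_G

def velocity_command(O_H_G, k1=0.5, k2=0.5):
    """Proportional command driving the pose error towards identity."""
    t = O_H_G[:3, 3]                                  # translational error
    R = O_H_G[:3, :3]
    # rotation vector (axis-angle) from the rotation matrix; valid away
    # from theta = pi, which is enough for a sketch
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        w = np.zeros(3)
    else:
        w = theta / (2.0 * np.sin(theta)) * np.array(
            [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return -k1 * t, -k2 * w                           # V, Omega
```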

Page 9: Hybrid Position-Based Visual Servoing

Implementation

Page 10: Hybrid Position-Based Visual Servoing

Visual Measurements

• Gripper tracked using active LED features, represented by an internal point model

• IEKF measurement model (sketched below):
  $\hat{\mathbf{g}}^{i}_{L,R} = P_{L,R}\big({}^{C}H_{W}\,{}^{W}H_{E}\,{}^{E}\mathbf{G}^{i}\big)$
  where ${}^{E}\mathbf{G}^{i}$ is the $i$-th gripper model point in the end-effector frame and $P_{L,R}$ the left/right camera projection.

[Figure: camera centre $C$ projects 3D gripper model points $\mathbf{G}^{i}$ to image-plane measurements $\mathbf{g}^{i}$]
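A sketch of this prediction under a plain pinhole model, assuming a focal length f in pixels and the principal point at the origin (both simplifications, not stated on the slide):

```python
# Sketch: project a 3D gripper model point (end-effector frame) into
# one camera, ghat_i = P(C_H_W @ W_H_E @ G_i). Pinhole model with an
# assumed focal length f; principal-point offset omitted.
import numpy as np

def predict_led(C_H_W, W_H_E, G_i, f=500.0):
    p = np.append(G_i, 1.0)                  # homogeneous model point
    X, Y, Z, _ = C_H_W @ W_H_E @ p           # point in camera coordinates
    return np.array([f * X / Z, f * Y / Z])  # perspective projection
```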

Page 11: Hybrid Position-Based Visual Servoing

Camera Model Errors

• In a practical system, the baseline and verge angle may not be known precisely.

[Figure: stereo rig with left/right camera centres, image planes, and baseline 2b; an incorrect baseline 2b* yields a scaled reconstruction, and an incorrect verge angle yields an affine reconstruction]

Page 12: Hybrid Position-Based Visual Servoing

Camera Model Errors

• How does scale error affect pose estimation?

• Consider the case of translation only by $\mathbf{T}_{E}$:
  – Predicted measurements:
    $\hat{\mathbf{g}}^{i}_{L,R} = P_{L,R}\big({}^{C}H_{W}\,{}^{W}H_{E}(\hat{\mathbf{T}}_{E})\,\mathbf{G}^{i}\big)$
  – Actual measurements:
    $\mathbf{g}^{i}_{L,R} = P_{L,R}\big({}^{C}H_{W}\,K^{-1}\,{}^{W}H_{E}(\mathbf{T}_{E})\,\mathbf{G}^{i}\big)$
  – Relationship between actual and estimated pose:
    $\hat{\mathbf{T}}_{E} = f(\mathbf{G}^{i},\,K^{-1},\,b,\,\mathbf{T}_{E})$

• The estimated pose for different objects in the same position with the same scale error is different, since $\hat{\mathbf{T}}_{E}$ depends on the model points $\mathbf{G}^{i}$ (a toy illustration follows).
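A toy illustration of this effect, assuming the calibration error acts as a uniform scaling s of the reconstruction (a simplification for illustration; all numbers are made up):

```python
# A translation-only least-squares fit of (G_i + T_hat) to the scaled
# reconstruction s*(G_i + T) gives T_hat = s*T + (s-1)*mean(G_i),
# which depends on the object's model points G_i.
import numpy as np

def fitted_translation(G, T, s):
    """T_hat minimising sum_i ||(G_i + T_hat) - s*(G_i + T)||^2."""
    return s * T + (s - 1.0) * G.mean(axis=0)

T = np.array([0.10, 0.00, 0.50])                          # true translation (m)
small = 0.02 * np.array([[1.0, 0, 0], [0, 1, 0], [-1, 0, 0], [0, 0, 1]])
large = 10.0 * small                                      # same shape, larger object
print(fitted_translation(small, T, s=1.1))                # [0.11  0.0005 0.5505]
print(fitted_translation(large, T, s=1.1))                # [0.11  0.005  0.555 ]
```

Two objects at the same true pose thus produce different pose estimates, which is why the scale must be estimated on-line.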

Page 13: Hybrid Position-Based Visual Servoing

Camera Model Errors

• Scale error will cause non-convergence of PBVS!

• Although the estimated gripper and object frames align, the actual frames are not aligned.

Page 14: Hybrid Position-Based Visual Servoing

Visual Measurements

• To remove model errors, a scale term is estimated by the IEKF using the modified measurement equation (sketched below):
  $\hat{\mathbf{g}}^{i}_{L,R} = P_{L,R}\big({}^{C}H_{W}\,\hat{K}^{-1}_{1}\,{}^{W}H_{E}\,\mathbf{G}^{i}\big)$

• The scale estimate requires four observed points with at least one in each stereo field.
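A sketch of the modified prediction, assuming the scale term acts as an isotropic scaling $K = \mathrm{diag}(k, k, k, 1)$ applied in the world frame (an illustrative parameterisation, not necessarily the authors' exact one):

```python
# Sketch: measurement prediction with the estimated scale k1_hat
# applied as K^-1 between the world and camera frames.
import numpy as np

def predict_led_scaled(C_H_W, W_H_E, G_i, k1_hat, f=500.0):
    p_W = W_H_E @ np.append(G_i, 1.0)        # model point in world frame
    p_W[:3] /= k1_hat                        # apply K^-1: undo the scale error
    X, Y, Z, _ = C_H_W @ p_W                 # transform into the camera frame
    return np.array([f * X / Z, f * Y / Z])  # perspective projection
```

Because the camera centres are displaced from the world origin, the scaling changes the projection, so the scale is observable from stereo measurements.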

Page 15: Hybrid Position-Based Visual Servoing

Kinematic Model

• The kinematic measurement from the PUMA is ${}^{B}H_{E}$.

• Measurement prediction (for the IEKF; see the sketch below):
  ${}^{B}H_{E} = {}^{B}H_{W}\,{}^{W}H_{E}$

• The hand-eye transformation ${}^{B}H_{W}$ is treated as a dynamic bias and estimated in the IEKF.

• Estimating ${}^{B}H_{W}$ requires visual estimation of ${}^{W}H_{E}$, so it is dropped from the state vector while the gripper is obscured.
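In code this composition is a single matrix product; the second helper shows why the bias is only observable while vision supplies ${}^{W}H_{E}$ (a sketch with illustrative names):

```python
# Sketch: kinematic measurement prediction and bias recovery.
import numpy as np

def predict_B_H_E(B_H_W, W_H_E):
    """Kinematic measurement prediction: B_H_E = B_H_W @ W_H_E."""
    return B_H_W @ W_H_E

def hand_eye_from_vision(B_H_E, W_H_E):
    """Bias from a visual pose estimate: B_H_W = B_H_E @ inv(W_H_E)."""
    return B_H_E @ np.linalg.inv(W_H_E)
```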

Page 16: Hybrid Position-Based Visual Servoing

Kalman Filter

• Kalman filter state vector (position, velocity, calibration parameters; a structural sketch follows):
  $\mathbf{x}(k) = \big({}^{W}\mathbf{p}_{E}(k),\ {}^{W}\dot{\mathbf{p}}_{E}(k),\ {}^{B}\mathbf{p}_{W}(k),\ K_{1}(k)\big)^{\top}$

• Measurement vector (visual + kinematic):
  $\mathbf{y}(k) = \big(\mathbf{g}^{0}_{L}(k),\ \mathbf{g}^{0}_{R}(k),\ \ldots,\ {}^{B}\mathbf{p}_{E}(k)\big)^{\top}$

• Dynamic models:
  – Constant velocity model for pose
  – Static model for calibration parameters

• Initial state from kinematic measurements.
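A structural sketch of such a filter in NumPy, assuming 6-DOF pose, velocity, and bias blocks (translation plus rotation vector) and a scalar scale; the layout and names are illustrative, not the authors' code:

```python
# Sketch: constant-velocity/static-parameter process model and an
# iterated EKF update that relinearises about the refined estimate.
import numpy as np

NP, NV, NB, NK = 6, 6, 6, 1            # pose, velocity, bias, scale dims
N = NP + NV + NB + NK                  # total state dimension

def transition_matrix(dt):
    """Pose integrates velocity; bias and scale are static."""
    F = np.eye(N)
    F[:NP, NP:NP + NV] = dt * np.eye(NP)
    return F

def predict(x, P, Q, dt):
    F = transition_matrix(dt)
    return F @ x, F @ P @ F.T + Q

def iekf_update(x, P, y, h, H_jac, R, iters=5):
    """Iterated EKF update: h() is the measurement model, H_jac its Jacobian."""
    xi = x.copy()
    for _ in range(iters):
        H = H_jac(xi)                              # relinearise at xi
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        xi = x + K @ (y - h(xi) - H @ (x - xi))
    return xi, (np.eye(N) - K @ H) @ P
```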

Page 17: Hybrid Position-Based Visual Servoing

Constraints

• Three points are required for visual pose recovery.

• Stereo measurements are required for scale estimation.

• LED association requires multiple observed LEDs.

• Estimation of ${}^{B}H_{W}$ requires visual observations.

• Use a hierarchy of estimators ($n_{L,R}$ = number of observed points; see the sketch below):
  – $n_{L,R} < 3$: EOL control, no estimation of $K_{1}$ or ${}^{B}H_{W}$
  – $n_{L} \geq 3$ xor $n_{R} \geq 3$: hybrid control, no estimation of $K_{1}$
  – $n_{L,R} \geq 3$: hybrid control (visual + kinematic)

• Excluded state variables are discarded by setting the corresponding rows and columns of the Jacobian to zero.
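The switching logic of this hierarchy can be summarised as below (a sketch; the flag names are illustrative, not the authors' interface):

```python
# Sketch: choose controller and active calibration states from the
# number of associated LEDs in each image.
def select_estimator(n_left, n_right):
    if n_left < 3 and n_right < 3:
        return {"control": "EOL", "estimate_K1": False, "estimate_bias": False}
    if (n_left >= 3) != (n_right >= 3):        # xor: only one camera usable
        return {"control": "hybrid-mono", "estimate_K1": False, "estimate_bias": True}
    return {"control": "hybrid-stereo", "estimate_K1": True, "estimate_bias": True}
```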

Page 18: Hybrid Position-Based Visual Servoing

LED Measurement

• LED centroids are measured using a red colour filter.

• Measured and model LEDs are associated using a global matching procedure (sketched below).

• Robust global matching requires 3 LEDs.

[Figure: predicted vs. observed LED positions in the image plane]
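A simplified sketch of one way to do global matching: a Hungarian assignment over an image-distance cost matrix with a gating threshold (SciPy). The authors' robust matching procedure is more involved; this shows the basic idea only:

```python
# Sketch: globally optimal predicted-to-observed LED association.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted, observed, gate=20.0):
    """Return (model_index, observation_index) pairs within the gate (px)."""
    cost = np.linalg.norm(predicted[:, None, :] - observed[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # minimise total distance
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
```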

Page 19: Hybrid Position-Based Visual Servoing

Experimental Results

• Positioning experiment:
  – Align the midpoint between thumb and forefinger with coloured marker A
  – Align thumb and forefinger on the line between A and B

• Accuracy evaluation (both metrics are sketched below):
  – Translation error: distance between the thumb/forefinger midpoint and A
  – Orientation error: angle between the line joining thumb and forefinger and the line joining A and B
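Both metrics can be computed directly from marker and fingertip positions; a sketch with illustrative names, treating both lines as undirected:

```python
# Sketch: the two accuracy metrics used in the experiments.
import numpy as np

def translation_error(thumb, forefinger, A):
    """Distance from the thumb/forefinger midpoint to marker A."""
    return np.linalg.norm((thumb + forefinger) / 2.0 - A)

def orientation_error(thumb, forefinger, A, B):
    """Angle (degrees) between the finger line and the line AB."""
    u = (forefinger - thumb) / np.linalg.norm(forefinger - thumb)
    v = (B - A) / np.linalg.norm(B - A)
    return np.degrees(np.arccos(np.clip(abs(u @ v), 0.0, 1.0)))
```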

Page 20: Hybrid Position-Based Visual Servoing

Positioning Accuracy

Hybrid controller, initial pose (right camera only)

Hybrid controller, final pose (right camera only)

Page 21: Hybrid Position-Based Visual Servoing

Positioning Accuracy

ECL controller, final pose (right camera only)

EOL controller, final pose (right camera only)

Page 22: Hybrid Position-Based Visual Servoing

Positioning Accuracy

• Accuracy measured over 5 trials per controller.

Page 23: Hybrid Position-Based Visual Servoing

Tracking Robustness

Initial pose: gripper outside FOV (ECL control)

Final pose: gripper obscured (Hybrid control, mono)

Gripper enters field of view (Hybrid control, stereo)

Page 24: Hybrid Position-Based Visual Servoing

Tracking Robustness

[Plots: translational component of pose error, and estimated scale (camera calibration parameter), for the EOL, hybrid mono, and hybrid stereo controllers]

Page 25: Hybrid Position-Based Visual Servoing

Baseline Error

• Error introduced in the calibrated baseline:
  – Baseline scaled by factors between 0.7 and 1.5

• Hybrid PBVS performance in the presence of this error:

Page 26: Hybrid Position-Based Visual Servoing

Verge Error

• Error introduced in the calibrated verge angle:
  – Offset between –6 and +8 degrees

• Hybrid PBVS performance in the presence of this error:

Page 27: Hybrid Position-Based Visual Servoing

Servoing Task

Page 28: Hybrid Position-Based Visual Servoing

Conclusions

• We have proposed a hybrid PBVS scheme to solve problems in real-world tasks:
  – Kinematic measurements overcome occlusions
  – Visual measurements improve accuracy and overcome calibration errors

• Experimental results verify the increased accuracy and robustness compared to conventional methods.