
Page 1: Scenario 1: Class-based Grasping (flowchart)

Modules (responsible partners):
- Segmentation (Walter, TUW)
- Classification and pose estimation (Walter, Aitor, TUW; Marianna, KTH)
- Differently scaled object model DB (Walter, TUW)
- Deformable object registration/fitting (Chavo, TUM)
- Grasp hypotheses DB (Beatriz, UJI)
- Grasp selection (Beatriz, UJI)
- Path planning (Beatriz, UJI)
- Robot model (Otto-Bock, KIT)
- Execute grasp and task (TUW, Kuka, Otto-Bock)

Data flow:
- Input: RGB-D image with 1 to 5 objects of a certain class (approx. 100 categories)
- Segmentation produces segmented point clouds
- Classification and pose estimation, against the differently scaled object model DB, yields categorized models with estimated pose and scale
- Registration fits the models to the point cloud in a deformable manner
- Grasp selection, drawing on the grasp hypotheses DB, returns the best-ranked grasp hypothesis
- Path planning, using the robot model, checks reachability: if no, select a different object in the scene; if yes, pass the trajectory on and execute the grasp and task
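Read as a control loop, the flowchart is: perceive, fit, pick the best grasp, and fall back to another object when planning fails. A minimal Python sketch of that loop follows; every callable is a hypothetical stand-in for the corresponding partner module, not the project's actual interface.

```python
# Sketch of the Scenario 1 control flow; the injected callables stand in
# for the partner modules, and their signatures are assumptions.
from typing import Callable, Optional


def class_based_grasping(rgbd_image,
                         segment: Callable,         # TUW: segmentation
                         classify: Callable,        # TUW/KTH: class, pose, scale
                         fit_deformable: Callable,  # TUM: deformable fitting
                         rank_grasps: Callable,     # UJI: grasp hypotheses DB
                         plan_path: Callable) -> Optional[object]:
    for cloud in segment(rgbd_image):           # segmented point clouds
        category, pose, scale = classify(cloud)
        model = fit_deformable(cloud, category, pose, scale)
        for grasp in rank_grasps(model):        # best-ranked hypothesis first
            trajectory = plan_path(grasp)       # None when not reachable
            if trajectory is not None:
                return trajectory               # execute grasp and task
        # not reachable: select a different object in the scene
    return None                                 # no graspable object left
```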

Page 2: Scenario 1 (discussion)

- How should the systems be adapted to the Vienna hardware?
  -- Backup platforms? Arm and hand models?
  -- Finish working with ARMAR and UJI.
  -- Task-based adaptation in Vienna is UNLIKELY due to time constraints (robot models not available yet).
  -- Different grasps will be executed on the Michelangelo hand category-based, not task-based.
  -- Can a database be collected by Vienna for training Marianna's system? (Kinect and plane-based segmentation; see the sketch after this list.)
- Include the task in the flow chart.
- Who is giving the task? Through what channel (speech, keyboard, etc.)?
- If possible, task-based execution coming from Scenario 3.
- SPEED?
- Walter is responsible for interfacing/integration.
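Since the training data for Marianna's system would come from a Kinect with plane-based segmentation, here is a minimal sketch of that preprocessing step. It assumes Open3D as the point-cloud library; the file name and all thresholds are placeholders, not project settings.

```python
# Plane-based segmentation of a Kinect capture, sketched with Open3D.
# "scene.pcd" and the thresholds below are hypothetical placeholders.
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("scene.pcd")

# Fit the dominant plane (the table top) with RANSAC; everything that is
# not a plane inlier is kept as object candidate points.
plane_model, inliers = cloud.segment_plane(distance_threshold=0.01,
                                           ransac_n=3,
                                           num_iterations=1000)
objects = cloud.select_by_index(inliers, invert=True)

# Cluster the remaining points into one segment per object.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
segments = [objects.select_by_index(np.where(labels == k)[0])
            for k in range(labels.max() + 1)]
```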

Page 3: Scenario 2: Empty the Basket (flowchart)

Input 1: RGB-D image of scene with basket and unknown objects
Output: Execute on TUW Amtec robot and old OB hand

Modules (responsible partners):
- Basket detection and attention points (TUW (Kate))
- (Under)segmentation of object(s) in basket (TUW (Kate), KTH?)
- 3D part detection (TUW (Karthik))
- Grasp hypothesis generation (TUW (Karthik))
- Online path planning (UJI (Beatriz))

Data flow:
- Basket detection yields the basket pose and attention points
- Segmentation turns the attention points into region hypotheses
- 3D part detection fits a superquadric to each region
- Grasp hypothesis generation produces a ranked list of grasp points + approach vectors
- Online path planning, given the superquadric mesh and pile mesh, outputs trajectories in joint space
- Reachable? If no, select the next region in the image; if yes, execute
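For reference, the superquadric that part detection fits and path planning consumes is usually written as the standard inside-outside function from the superquadric literature (an assumption here; the slide does not give the exact formulation):

$$
F(x,y,z) = \left( \left(\frac{x}{a_1}\right)^{2/\varepsilon_2} + \left(\frac{y}{a_2}\right)^{2/\varepsilon_2} \right)^{\varepsilon_2/\varepsilon_1} + \left(\frac{z}{a_3}\right)^{2/\varepsilon_1}
$$

where $a_1, a_2, a_3$ are the extents along the principal axes and $\varepsilon_1, \varepsilon_2$ control the shape from box-like to ellipsoidal. A point lies on the surface when $F = 1$ and inside when $F < 1$, which is what makes the fitted superquadric usable both as a compact grasp target and as a collision primitive for the planner.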

Page 4: Scenario 2 (discussion)

- Robustness of attention point detection and basket detection?
  -- Should be OK as soon as basket points are filtered out. Detection of points slightly off the pile due to randomness will be solved. Work will be put into speeding it up. ROS package.
- Should there be a simple milestone before the demo?
- Plan B:
  -- Empty a basket of known objects?
  -- Part-based grasping of easily segmentable unknown objects.
  -- Use attention points (TUW) to provide approach vectors to the UJI box-emptying approach?
  -- Kate will send Javier (UJI) the code (ROS package) for the attention points.
- SEGMENTATION WON'T WORK ROBUSTLY. Is robust segmentation critical for part detection?
  -- Segmentation actually looks reasonably good. In addition, multiple views could improve robustness; a human could enter the scene and rotate the basket.
- Optimization (~20 s right now) and integration (ROS-Matlab) required.
- 3D part detection seems to work in not-too-crowded scenes; not tested in extra-crowded boxes (probably not needed).
- Open-loop grasping? Calibration.
- ??
- SPEED? Some modules need to be sped up.
- UJI still has to provide the path planning based on the pile mesh and superquadric.
- Vienna should know more about the feasibility of this scenario by 30/11.
- Software interface for the trajectory between OpenRAVE and the physical robot.
- TECHNICAL ANNEX: "Grasping any object by building up relations between task setting, embodied hand actions, object attributes, and contextual knowledge such that learned grasps are extendable towards new, never seen objects in new situations [...] Progress will be measured along two benchmarks as exemplars of industrial and home tasks:
  -- Emptying a box of industrial parts and
  -- Emptying a basket of grocery items."
- David (TUW) is responsible for integration.

Page 5: Scenario 3: Task Recognition on Human Demo (flowchart)

Input: Scene with known objects (as many as possible)
Output: Task for which the observed grasp is good

Modules (responsible partners):
- Hand/object tracking (Iason, Niko, FORTH)
- Object recognition & pose estimation (Chavo, TUM)
- Task recognition (Dan, KTH)
- Object model DB with object features

Data flow:
- Object recognition determines which objects are in the scene and where they are placed, and returns an object index into the model DB
- The object model DB supplies the object features: category, size, convexity, etc.
- Hand/object tracking provides the object-centered hand pose (position/orientation)
- Task recognition combines the object features and the hand pose into the recognized task

Possible objects: hammer, knife, screwdriver, mug, bottle
Possible tasks: pouring, tool-use, dishwashing, playing
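To make the last stage concrete, a toy sketch of mapping object features plus an object-centered hand pose to one of the listed tasks. The nearest-neighbour lookup over labelled demonstrations, the feature layout, and all numbers are illustrative assumptions, not Dan's actual recognizer.

```python
# Toy version of Scenario 3's task recognition: nearest neighbour over
# hand-labelled demonstrations. Features and numbers are hypothetical.
import numpy as np

def featurize(category_id, size, convexity, hand_pose):
    # hand_pose: object-centered (x, y, z, roll, pitch, yaw) from tracking
    return np.array([category_id, size, convexity, *hand_pose], dtype=float)

def recognize_task(query, demos):
    # demos: list of (feature_vector, task_label) pairs
    dists = [np.linalg.norm(query - feats) for feats, _ in demos]
    return demos[int(np.argmin(dists))][1]

# Hypothetical demos: a bottle grasped near the neck was labelled "pouring",
# a hammer grasped at the handle was labelled "tool-use".
demos = [(featurize(4, 0.25, 0.9, (0.0, 0.0, 0.20, 0.0, 1.2, 0.0)), "pouring"),
         (featurize(0, 0.30, 0.4, (0.0, 0.0, -0.10, 0.0, 0.0, 0.0)), "tool-use")]
query = featurize(4, 0.24, 0.85, (0.0, 0.0, 0.18, 0.0, 1.1, 0.0))
print(recognize_task(query, demos))  # -> "pouring"
```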

Page 6: Scenario 3 (discussion)

- (Chavo, FORTH) Hand initialization?
- Multi-object tracking? Should we go for a single object? Multi-hand?
- Dan needs the object and hand pose. Do FORTH and KTH use different hand models?
- Maybe use the task information for Scenario 1.
- Which sensors will be used? Chavo prefers the Kinect; FORTH might work with the Kinect but has previously worked with a portable multi-camera setup.
- Iason is responsible for integration.

Page 7: Video-Demos

- Ville: adaptive grasping?
  -- Combined LUT and UJI? Not implemented on ARMAR.
  -- Collaboration with Vienna?
- UJI and ARMAR task- and category-based grasping videos.
- Video on surprise grasping.
  -- Update scenario knowledge; collaboration between FORTH and TUM.
- Videos from individual partners.
- Benchmarking environment (KIT).
  -- Different physics engines for dynamic grasping environments.
  -- Object reconstruction from incomplete point clouds.
  -- Grasp planners on incomplete point clouds.