
Page 1: Projection Mapping Implementation

Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability

Zhao Han, Alexander Wilkinson, Jenna Parrillo, Jordan Allspaw, Holly A. Yanco

November 13, 2020, AI-HRI 2020

Page 2: Projection Mapping Implementation

Introduction

Improving understanding of robots:

● Improves trust in real time
● Leads to greater efficiency during more difficult human-robot collaboration scenarios

Due to embodiment, HRI researchers have focused on:

● Enabling robots to communicate their intention through non-verbal means
● E.g., eye gaze, arm movement

(Desai et al. HRI ‘13)

(Admoni et al. HRI ‘16)

(Moon et al. HRI ‘14)(Admoni, Scassellati. JHRI ‘17)

(Dragan et al. HRI ‘13)(Kwon et al. HRI ‘18)


Page 3: Projection Mapping Implementation

Problem

Non-verbal cues in human life:

● Play an important role in supporting and improving communication
● But can also cause confusion

E.g., how can a robot communicate:

● Which objects it has detected?
● Which one it is going to grasp?

Use eye gaze and arm motion?

● Pointing can be vague

Verbal explanation?

● Gestures can still be underspecified
● It requires follow-up questions

(Admoni et al. AI-HRI ‘15)


Page 4: Projection Mapping Implementation

One Approach: Projection Mapping

Projection mapping of perception results and action intent.

We present a tool for:

● Implementing projection mapping
● Using an off-the-shelf projector

We will discuss:

● The high-level architecture
● Low-level technical details
● The hardware (robot) platform

(Figure: the detected objects and the object to grasp, highlighted by projection)

Open-sourced on GitHub: github.com/uml-robotics/projection_mapping


Page 6: Projection Mapping Implementation

Why a Projection Mapping Implementation?

Projection mapping in robotics is not a new idea, but it remains relatively rare:

● Most prior work is simple navigation projection
● Few systems do full projection mapping

A documented implementation effort has been missing, which:

● Blocks knowledge of the effects of accurate externalization through projection mapping
● Makes it hard to compare it with other methods

Chadalavada et al., ECMR ‘15.

Andersen et al., RO-MAN ‘16.

Watanabe et al., ROS ‘15.

Coovert et al., Computers in Human Behavior ‘14.


Page 7: Projection Mapping Implementation

Why Projection Mapping?

1. Direct and accurate externalization

● Objects are directly externalized
  ○ Perceived or to be manipulated
● Completely removes the need for mental inference
  ○ cf. non-verbal methods and verbal explanations

2. More salient

● Compared to other direct methods, e.g., a display screen
● Projection can be seen from farther away

3. Eliminates mental mapping

● From another medium, e.g., a display screen
  ○ Which can cause misjudgment and
  ○ Lead to undesired consequences


Page 8: Projection Mapping Implementation

Implementation Overview

Data flow (Pub. = ROS publisher):

1. Calibration produces the projector lens intrinsics, published as a CameraInfo message (CameraInfo pub.)
2. The virtual camera subscribes to the CameraInfo message
3. The virtual camera is an Rviz plugin (rviz_camera_stream); its pose is the same as the projector's
4. The Rviz visualization subscribes to object clusters (ROS point cloud pub.); alternatives: interactive markers and other displays in Rviz
5. Rviz renders the virtual world
6. The image viewer (full-screen GUI) subscribes to the image published by the virtual camera (Image pub.)
7. Output to the projector over HDMI
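The calibration step of the overview can be sketched in code. The snippet below only illustrates how calibrated lens intrinsics are packed into the flat, row-major 3x3 K matrix that a ROS sensor_msgs/CameraInfo message carries; all numeric values are hypothetical placeholders, not our projector's actual calibration, and a real node would publish a CameraInfo message rather than build a dict.

```python
# Hypothetical projector intrinsics (placeholders, not a real calibration).
fx, fy = 1397.0, 1397.0   # focal lengths in pixels
cx, cy = 640.0, 360.0     # principal point for a 1280x720 image

# sensor_msgs/CameraInfo stores K as a flat, row-major 3x3 matrix.
camera_info = {
    "width": 1280,
    "height": 720,
    "distortion_model": "plumb_bob",
    "D": [0.0, 0.0, 0.0, 0.0, 0.0],  # assume distortion is negligible
    "K": [fx, 0.0, cx,
          0.0, fy, cy,
          0.0, 0.0, 1.0],
}

# The virtual camera (step 2) subscribes to this message and adopts the
# same intrinsics, so its rendered image matches the projector's optics.
```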


Page 9: Projection Mapping Implementation

Why Does This Projection Mapping Implementation Work?

Principle: a projector is the dual of a camera

● A camera maps 3D world coordinates to 2D image plane coordinates
● A projector maps 2D image plane coordinates to 3D world coordinates

● In a camera, light rays from the world pass through the lens and hit the sensor
● In a projector, light rays pass through the lens and hit a surface in the world

A virtual camera with the same intrinsics and pose as the real projector:

● Allows us to map points from the virtual world's 3D space to the 3D space in the real world

Low-level details next...
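The camera-projector duality can be made concrete with the pinhole model. The intrinsic values below are hypothetical, chosen only to illustrate the round trip between the two directions of the mapping:

```python
# Pinhole model with hypothetical intrinsics (not a calibrated lens).
fx, fy = 1000.0, 1000.0   # focal lengths in pixels
cx, cy = 640.0, 360.0     # principal point

def camera_map(x, y, z):
    """Camera direction: a 3D point (in the camera frame) -> 2D pixel."""
    return (fx * x / z + cx, fy * y / z + cy)

def projector_map(u, v, z):
    """Projector direction: a 2D pixel plus the depth of the surface it
    hits -> the 3D point where the light ray lands."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Duality as a round trip: projecting a point to a pixel and casting the
# pixel back out at the same depth recovers the original point.
point = (0.2, -0.1, 1.5)
u, v = camera_map(*point)
recovered = projector_map(u, v, point[2])
assert all(abs(a - b) < 1e-9 for a, b in zip(point, recovered))
```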

Page 10: Projection Mapping Implementation

Projector Selection Considerations

Projection must be visible and legible under bright light conditions:

● Unlike common uses, presenting slides and watching movies (lights off or dimmed)
● Robots often operate indoors with the lights on, or outdoors in sunlight

Brightness: the standard measure is ANSI lumens, the total light output (e.g., 3,800)

Contrast: the ratio between the darkest and lightest areas of an image (e.g., 22,000:1)

Projection technology: choose DLP over LCD for higher brightness

● DLP is based on the Digital Micromirror Device (DMD)
● It has a reflective, high-fill-factor digital light switch
  ○ Active-matrix LCDs are transmissive, so the generated heat cannot be dissipated well (Hornbeck 1997)

ViewSonic PA503W


Page 11: Projection Mapping Implementation

Virtual Camera: Connecting the Virtual & Real Worlds

Our lab developed the rviz_camera_stream Rviz plugin:

● It outputs a 2D image of what a camera sees in the Rviz virtual world
  ○ Open-sourced at github.com/uml-robotics/rviz_camera_stream

The virtual camera is placed at the real projector's pose.

The pose is specified in the same robot transform hierarchy as the perception sensor:

● Everything in Rviz can be transformed from the perception sensor's frame to that of the virtual camera, such as point clouds and interactive markers
● This renders the projection from the projector's viewpoint of what the perception sensor sees
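The frame chaining can be sketched with homogeneous transforms. The frame names and numeric offsets below are hypothetical, not the robot's actual TF tree; in practice ROS's tf2 library performs this lookup and composition automatically:

```python
import math

# Hypothetical TF chain: sensor -> base -> projector. In a real system,
# tf2 composes these from the robot's URDF and published transforms.
def make_tf(yaw, tx, ty, tz):
    """4x4 homogeneous transform: rotation about Z by yaw, then translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product of two 4x4 transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    """Transform a 3D point p by the homogeneous transform t."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(t[i][j] * v[j] for j in range(4)) for i in range(3))

base_from_sensor = make_tf(0.0, 0.1, 0.0, 1.2)        # hypothetical sensor mount
projector_from_base = make_tf(0.0, 0.0, -0.05, -1.4)  # hypothetical projector mount

# Compose once; then every point detected in the sensor frame can be
# expressed in the virtual camera (projector) frame.
projector_from_sensor = compose(projector_from_base, base_from_sensor)
detected = (0.5, 0.2, 0.0)                 # a point in the sensor frame
in_projector = apply(projector_from_sensor, detected)
```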


Page 12: Projection Mapping Implementation

Projection Output

rviz_camera_stream publishes the image from the virtual camera.

We used the image_view ROS package to subscribe to and show the image.

The image viewer GUI is made full screen:

● No additional window managers (e.g., i3) are used
● In Ubuntu, one can enable the "Toggle full screen" keyboard shortcut to remove the window chrome


Page 13: Projection Mapping Implementation

Hardware Platform

We demonstrate our implementation on a Fetch robot:

● A custom structure mounts the projector
  ○ If mounted on its head, the neck cannot bear the weight
● To pan and tilt the projector, we attached a ScorpionX MX-64 Robot Turret Kit

The projection mapping technique is robot agnostic.


Page 14: Projection Mapping Implementation

Being Robot Agnostic

We previously applied it on an assistive robot.

Minimum requirement: the projector frame is integrated into the robot's TF tree

● Our setup: projector lens & attachment point (intermediate frame)

This does not mean that the projector must be attached to the robot; it only needs to be co-located with the robot

● Convenient for robot arms

Wang et al., ISRR '19.

Page 15: Projection Mapping Implementation

Steps to Implement Projection Mapping for Robotics

1. Set up projector
   - Buy and mount the projector
   - Calculate the transform

2. Get lens intrinsics
   - Calibrate the lens (guide on GitHub)
   - Publish it in a CameraInfo message

3. Set up virtual camera
   - Use rviz_camera_stream
   - Subscribe to the CameraInfo message

4. Add Rviz displays
   - Point cloud clusters
   - Any other displays, e.g., interactive markers

5. Output projection
   - Point the projector with the turret unit
   - Output the image with image_view

Page 17: Projection Mapping Implementation

Thanks!

Open source github.com/uml-robotics/projection_mapping

Zhao Han [email protected] www.cs.uml.edu/~zhan

Lab website: robotics.cs.uml.edu

Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability

Zhao Han, Alexander Wilkinson, Jenna Parrillo, Jordan Allspaw, and Holly A. Yanco



Page 19: Projection Mapping Implementation

More at a High Level

The code repository¹ includes:

● A sample Rviz config file
● A pcd file (point cloud)
● A launch file to publish the CameraInfo message and the projector pose
● A well-documented readme file

1. github.com/uml-robotics/projection_mapping

