Learning for Active 3D Mapping (cmp.felk.cvut.cz/~zimmerk/iccv_2017/iccv17_poster.pdf)


International Conference on Computer Vision 2017

Learning for Active 3D Mapping

Karel Zimmermann, Tomáš Petříček, Vojtěch Šalanský, Tomáš Svoboda
Czech Technical University in Prague

Problem definition

Motivation: New solid-state lidars will allow independent steering of depth-measuring rays.

Goal:
1. Learn to reconstruct a dense 3D voxel map from sparse depth measurements.
2. Optimize reactive control of depth-measuring rays along an expected vehicle trajectory.

Overview of active 3D mapping pipeline

➢ Learning of the network parameters θ and planning of the rays J optimize the common objective given below.

Learning of 3D mapping network

➢ Learning searches for the network parameters θ
➢ The result of planning is not differentiable w.r.t. θ
➢ The objective is therefore first locally approximated around an initial point and then optimized by SGD
➢ The optimization is iterated
➢ A fixed point of this mapping would assure: 1. local optimality of the objective, 2. statistical consistency of the learning process
➢ In practice, we iterate until the validation error stops decreasing
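The iteration above alternates between ray planning (frozen at the current parameters) and SGD updates of the network. The following Python sketch only illustrates that control flow under toy stand-ins; plan_rays, sparse_measurements, mapping_net and all shapes are hypothetical placeholders, not the authors' implementation.

# Toy sketch of the alternating learning/planning loop (all names hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def plan_rays(theta, n_rays=8):
    # Stand-in for the planner: chooses which rays to cast given the current net.
    return rng.choice(64, size=n_rays, replace=False)

def sparse_measurements(rays, y_true):
    # Stand-in for casting the selected rays: reveals a sparse subset of the map.
    x = np.zeros_like(y_true)
    x[rays] = y_true[rays]
    return x

def mapping_net(x, theta):
    # Toy linear "network" reconstructing a dense map from the sparse input.
    return x @ theta

def loss_and_grad(x, theta, y_true):
    # Mean squared per-voxel loss and its gradient w.r.t. theta.
    residual = mapping_net(x, theta) - y_true
    return np.mean(residual ** 2), 2.0 * np.outer(x, residual) / residual.size

y_true = rng.normal(size=64)              # ground-truth values of a toy 1-D "map"
theta = 0.01 * rng.normal(size=(64, 64))  # network parameters
best_val, lr = np.inf, 1e-2

for t in range(100):                      # iteration theta_0, theta_1, ..., theta_t
    rays = plan_rays(theta)               # planning is not differentiable w.r.t. theta,
    x = sparse_measurements(rays, y_true) # so the measurements are frozen at theta_t
    for _ in range(20):                   # ...and only the network is updated by SGD
        _, grad = loss_and_grad(x, theta, y_true)
        theta -= lr * grad
    # crude validation check on a fresh set of rays; stop once it stops decreasing
    val, _ = loss_and_grad(sparse_measurements(plan_rays(theta), y_true), theta, y_true)
    if val >= best_val:
        break
    best_val = val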

Planning of depth measuring rays

➢ No ground truth is available online
➢ The objective is approximated from the currently available map
➢ Planning of rays for the following L positions is approximated by the planning objective given below

Algorithm properties

➢ 500x fewer re-estimations
➢ 50x faster than the naive approach
➢ 50 ms for planning (1e6 voxels, 1e5 rays)
➢ Derived an upper bound on the approximation ratio

Experiments

Structure of the 3D mapping net
Qualitative evaluation
Quantitative evaluation

➢ Training: 20 sequences from the KITTI Residential category
➢ Testing: 13 sequences from the KITTI City category
➢ Local maps of 320x320x32 voxels (1 voxel ≈ 20 cm, i.e. 64 m x 64 m x 6.4 m)
➢ Selected K=200 rays per position out of 19200
➢ Horizon of L=5 positions
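As a quick arithmetic check of these numbers (derived here, not stated on the poster), the map extent follows from the voxel size and the ray budget shows how sparse the measurements are:

\[
320 \cdot 0.2\,\mathrm{m} = 64\,\mathrm{m}, \qquad
32 \cdot 0.2\,\mathrm{m} = 6.4\,\mathrm{m}, \qquad
\frac{K}{\#\text{rays}} = \frac{200}{19200} \approx 1\%.
\]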

Acknowledgement: The research leading to these results has received funding from the European Union under grant agreement FP7-ICT-609763 TRADR and No.692455 Enable-S3, from the Czech Science Foundation under Project 17-08842S, and from the Grant Agency of the CTU in Prague under Project SGS16/161/OHK3/2T/13. Images of S3 Lidar redistributed with permission of Quanergy Systems (http://quanergy.com)

Learning update (measurements x(θ_t) frozen at the current parameters):

\[
\theta_{t+1} = \arg\min_{\theta} \sum_{\text{voxels}} L\big(y(x(\theta_t), \theta),\, y^{*}\big)
\]

[Diagram: the sequence of parameters θ_0, θ_1, θ_2, ..., θ_t, θ_{t+1} produced by iterating this update, with measurements x(θ).]

Planning objective (selection of rays J):

\[
\arg\min_{J} \sum_{\text{voxels}} \epsilon_i \prod_{j \in J} p_{ij}
\quad \text{subject to} \quad |J_{\ell}| \le K
\]
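A greedy pass over this budgeted selection can be sketched as follows. This is only an illustrative plain greedy selection, not necessarily the prioritized variant used on the poster; greedy_plan, eps and p are hypothetical names with random toy inputs.

# Illustrative greedy selection of K rays minimizing sum_i eps_i * prod_{j in J} p_ij.
import numpy as np

def greedy_plan(eps, p, K):
    # eps : (n_voxels,)        current loss in each voxel
    # p   : (n_voxels, n_rays) probability that ray j does NOT observe voxel i
    # K   : ray budget per position
    n_voxels, n_rays = p.shape
    miss = np.ones(n_voxels)                  # prod_{j in J} p_ij; empty J gives 1
    selected = []
    for _ in range(K):
        # expected total loss if ray j were added to the current set J
        scores = ((eps * miss)[:, None] * p).sum(axis=0)
        scores[selected] = np.inf             # never pick the same ray twice
        j = int(np.argmin(scores))
        selected.append(j)
        miss *= p[:, j]                       # update the running product of misses
    return selected

rng = np.random.default_rng(0)
eps = rng.random(1000)                        # toy per-voxel losses
p = rng.uniform(0.5, 1.0, size=(1000, 50))    # toy miss probabilities
print(greedy_plan(eps, p, K=5))

Each pick greedily maximizes the immediate reduction of the expected loss; maintaining the running product of miss probabilities keeps the whole pass at O(K · n_voxels · n_rays).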

[1] Choy et al., A unified approach for single and multi-view 3D object reconstruction, ECCV 2016.

Full setting

See video online:

Emitted laser beams are split and transmitted through an Optical Phased Array (OPA). Controlling the optical properties of the OPA elements allows the laser beams to be steered into the desired directions. Reflected laser beams are captured by a SPAD array.

Prioritized Greedy algorithm

[Plot: derived upper bound (UB) on the approximation ratio of the proposed solution with respect to the optimal coverage (labels: "optimal coverage", "30%"); pipeline blocks labelled "learning" and "planning".]

Terms of the planning objective:
➢ $\epsilon_i$ ... current loss in voxel $i$
➢ $\prod_{j \in J} p_{ij}$ ... probability that voxel $i$ is not visible by any ray $j \in J$
➢ $\epsilon_i \prod_{j \in J} p_{ij}$ ... expected loss in voxel $i$ after casting the rays $J$
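As a small worked example with assumed numbers (not taken from the poster): suppose voxel $i$ currently has loss $\epsilon_i = 0.8$ and the planned set $J$ contains two rays with miss probabilities $p_{i1} = 0.5$ and $p_{i2} = 0.4$. Then

\[
\epsilon_i \prod_{j \in J} p_{ij} = 0.8 \cdot (0.5 \cdot 0.4) = 0.16,
\]

i.e. casting both rays is expected to reduce this voxel's contribution to the objective from 0.8 to 0.16.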

Common objective of learning and planning, optimized jointly over the rays J and the network parameters θ:

\[
\arg\min_{J,\,\theta} \sum_{\text{voxels}} L\big(y(x(\theta), \theta),\, y^{*}\big)
\quad \text{subject to} \quad |J_{\ell}(\theta)| \le K
\]

Limited setting
