
High-Precision Globally-Referenced Position and Attitude via a Fusion of Visual SLAM,

Carrier-Phase-Based GPS, and Inertial Measurements

Daniel Shepard and Todd Humphreys

2014 IEEE/ION PLANS Conference, Monterey, CA | May 8, 2014

2 of 21

Globally-Referenced Visual SLAM

Motivating Application: Augmented Reality

Estimation Architecture

Bundle Adjustment (BA)

Simulation Results for BA

Overview

3 of 21

Produces high-precision estimates of camera motion (with ambiguous scale for monocular SLAM) and a map of the environment

Limited in application due to lack of a global reference

Stand-Alone Visual SLAM

[1] G. Klein and D. Murray, “Parallel tracking and mapping for small AR workspaces,” in 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. IEEE, 2007, pp. 225–234.

4 of 21

Globally-referenced solution if fiduciary markers are globally-referenced

Requires substantial infrastructure and/or mapping effort
Example: Microsoft’s augmented reality maps (TED 2010 [2])

Visual SLAM with Fiduciary Markers

[2] B. A. y Arcas, “Blaise Aguera y Arcas demos augmented-reality maps,” TED, Feb. 2010, http://www.ted.com/talks/blaise aguera.html.

5 of 21

Can globally-referenced position and attitude (pose) be recovered from combining visual SLAM and GPS?

6 of 21

No GPS positions: translation, rotation, and scale all unobservable
1 GPS position: translation observable; rotation and scale unobservable
2 GPS positions: translation and scale observable; rotation only partially observable (rotation about the baseline between the two positions remains free)
3 GPS positions: translation, rotation, and scale all observable (provided the positions are not collinear)

Observability of Visual SLAM + GPS
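As a quick sanity check on this pattern, here is a sketch of the standard degrees-of-freedom count (a generic argument, not the paper's derivation): aligning the SLAM frame to the global frame is a 7-DOF similarity transform, and each globally-referenced position supplies 3 scalar constraints.

```latex
\underbrace{3}_{\text{translation}} + \underbrace{3}_{\text{rotation}} + \underbrace{1}_{\text{scale}} = 7 \ \text{DOF},
\qquad k \ \text{GPS positions} \;\Rightarrow\; 3k \ \text{constraints}
```

So k = 1 (3 constraints) pins down only translation, k = 2 (6 constraints) leaves one degree of freedom free (rotation about the baseline), and k = 3 non-collinear positions (9 constraints) give full observability.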

7 of 21

CDGPS anchors visual SLAM to a global reference frame

Can add an IMU to improve dynamic performance (not required!)

Can be made inexpensive

Requires little infrastructure

Combined Visual SLAM and CDGPS

Very Accurate!

8 of 21

Augmenting a live view of the world with computer-generated sensory input to enhance one’s current perception of reality[3]

Current applications are limited by lack of accurate global pose

Potential uses in construction, real estate, gaming, and social media

Motivating Application: Augmented Reality

[3] M. Graham, M. Zook, and A. Boulton, “Augmented reality in urban places: contested content and the duplicity of code,” Transactions of the Institute of British Geographers.

9 of 21

Sensors: camera, two GPS antennas (reference and mobile), and an IMU

How can the information from these sensors best be combined to estimate the camera pose and a map of the environment?
Key considerations: real-time operation; computational burden vs. precision

Estimation Architecture Motivation

10 of 21

Sensor Fusion Approach

[Diagram: candidate architectures for coupling the IMU, visual SLAM, and CDGPS measurements, from loosely to tightly coupled]

Tighter coupling = higher precision, but increased computational burden

11 of 21

The Optimal Estimator

12 of 21

IMU only for Pose Propagation

13 of 21

Tightly-Coupled Architecture

14 of 21

Loosely-Coupled Architecture

15 of 21

Hybrid Batch/Sequential Estimator
Only geographically diverse frames (keyframes) are included in the batch estimator
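As an illustration of what geographically diverse keyframe selection can look like, here is a minimal sketch that promotes a frame to a keyframe once the camera has translated more than a distance threshold since the last keyframe. The function name, data layout, and default spacing are assumptions for illustration (the simulation later in the deck uses keyframes every 0.25 m), not the authors' implementation.

```python
import numpy as np

def select_keyframes(camera_positions, min_spacing=0.25):
    """Return indices of frames kept as keyframes.

    A frame becomes a keyframe once the camera has moved at least
    `min_spacing` meters from the previously selected keyframe.
    `camera_positions` is an (N, 3) array of camera positions in meters.
    """
    keyframes = [0]  # always keep the first frame
    for i in range(1, len(camera_positions)):
        step = np.linalg.norm(camera_positions[i] - camera_positions[keyframes[-1]])
        if step >= min_spacing:
            keyframes.append(i)
    return keyframes

# Example: a straight 5 m trajectory sampled every 5 cm
x = 0.05 * np.arange(100)
positions = np.column_stack([x, np.zeros(100), np.zeros(100)])
kf = select_keyframes(positions)
print(len(kf), "keyframes selected from", len(positions), "frames")
```

Only these keyframes enter the batch estimator; the remaining frames are handled by the sequential filter described in the backup slides.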

16 of 21

State vector: keyframe camera poses and point feature positions

Measurement models:
CDGPS positions
Image feature measurements

Bundle Adjustment State and Measurements
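The equations on this slide did not survive extraction. As a hedged sketch, a generic formulation consistent with the slide text is given below; the symbols (keyframe position p_i, attitude q_i, point feature X_j, camera-to-antenna lever arm b, and camera projection π) are assumed notation, not necessarily the paper's.

```latex
% State: poses of the N keyframes plus the M point feature positions
\mathbf{x} = \{\mathbf{p}_i,\ \mathbf{q}_i\}_{i=1}^{N} \,\cup\, \{\mathbf{X}_j\}_{j=1}^{M}

% CDGPS antenna-position measurement at keyframe i
\mathbf{z}^{\mathrm{GPS}}_i = \mathbf{p}_i + R(\mathbf{q}_i)\,\mathbf{b} + \mathbf{v}_i

% Image feature measurement of point j observed in keyframe i
\mathbf{z}_{ij} = \pi\!\left(R(\mathbf{q}_i)^{\mathsf{T}}\,(\mathbf{X}_j - \mathbf{p}_i)\right) + \mathbf{w}_{ij}
```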

17 of 21

Weighted least-squares cost function
Employs robust weight functions to handle outliers

Sparse Levenberg-Marquardt algorithm
Computational complexity linear in the number of point features, but cubic in the number of keyframes

Bundle Adjustment Cost Minimization
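To illustrate where this complexity split comes from, here is a minimal numerical sketch of the generic sparse-BA normal equations (not the authors' code; the 6-parameters-per-keyframe partitioning and all names are assumptions). Because the point-feature block of the normal equations is block-diagonal, each point can be eliminated with a 3x3 inverse, leaving a dense reduced camera system whose size grows with the number of keyframes.

```python
import numpy as np

def reduced_camera_system(B, E_blocks, C_blocks, g_cam, g_pt_blocks):
    """Eliminate point features from BA normal equations via the Schur complement.

    Normal equations: [[B, E], [E^T, C]] [dc; dp] = [g_cam; g_pt], where
    B is the (6N x 6N) keyframe block, E_blocks[j] is the (6N x 3) coupling
    block for point j, and C_blocks[j] is its (3 x 3) diagonal block.
    Cost: M small 3x3 inversions (linear in point features) plus one dense
    solve of the (6N x 6N) reduced system (cubic in keyframes).
    """
    S = B.copy()
    g = g_cam.copy()
    for Ej, Cj, gpj in zip(E_blocks, C_blocks, g_pt_blocks):
        Cj_inv = np.linalg.inv(Cj)        # cheap 3x3 inverse per point
        S -= Ej @ Cj_inv @ Ej.T           # Schur complement: S = B - E C^{-1} E^T
        g -= Ej @ Cj_inv @ gpj
    return np.linalg.solve(S, g)          # dense solve over keyframe parameters only

# Tiny example: N = 2 keyframes (12 camera parameters), M = 3 point features
rng = np.random.default_rng(0)
N, M = 2, 3
A = rng.standard_normal((6 * N, 6 * N))
B = A @ A.T + 6 * N * np.eye(6 * N)       # well-conditioned keyframe block
E_blocks = [rng.standard_normal((6 * N, 3)) for _ in range(M)]
C_blocks = [10.0 * np.eye(3) for _ in range(M)]
dc = reduced_camera_system(B, E_blocks, C_blocks,
                           rng.standard_normal(6 * N),
                           [rng.standard_normal(3) for _ in range(M)])
print(dc.shape)  # (12,) update for the keyframe parameters
```

The robust weight functions mentioned above would rescale the residuals and Jacobian rows before these normal equations are formed; that step is omitted here for brevity.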

18 of 21

Initialize BA based on the stand-alone visual SLAM solution and CDGPS positions
Determine the similarity transform relating the two coordinate systems

Generalized form of Horn’s transform[4]

Rotation: Rotation that best aligns deviations from mean camera position

Scale: A ratio of metrics describing spread of camera positions

Translation: Difference in mean antenna position

Bundle Adjustment Initialization

[4] B. K. Horn, “Closed-form solution of absolute orientation using unit quaternions,” JOSA A, vol. 4, no. 4, pp. 629–642, 1987.
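For concreteness, here is a hedged sketch of a Horn/Umeyama-style closed-form alignment following the three steps listed above (rotation from the deviations about the mean, scale from the ratio of spreads, translation from the means). The function and variable names are mine, and this plain version is not necessarily the paper's generalized variant.

```python
import numpy as np

def similarity_from_correspondences(local_pts, global_pts):
    """Estimate s, R, t such that global ≈ s * R @ local + t.

    local_pts, global_pts: (N, 3) arrays of corresponding positions
    (e.g., SLAM-frame camera positions and CDGPS antenna positions).
    Needs N >= 3 non-collinear correspondences for a unique answer.
    """
    mu_l = local_pts.mean(axis=0)
    mu_g = global_pts.mean(axis=0)
    dl = local_pts - mu_l                  # deviations from mean, local frame
    dg = global_pts - mu_g                 # deviations from mean, global frame

    # Rotation that best aligns the deviations (Kabsch/SVD solution)
    U, _, Vt = np.linalg.svd(dg.T @ dl)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt

    # Scale: ratio of metrics describing the spread of the positions
    s = np.sqrt((dg ** 2).sum() / (dl ** 2).sum())

    # Translation: difference of the mean positions after rotation and scaling
    t = mu_g - s * R @ mu_l
    return s, R, t

# Self-check against a known similarity transform
rng = np.random.default_rng(1)
P = rng.standard_normal((10, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true *= np.sign(np.linalg.det(R_true))   # force a proper rotation
Q = 2.5 * P @ R_true.T + np.array([10.0, -3.0, 0.5])
s, R, t = similarity_from_correspondences(P, Q)
print(round(s, 6))                          # recovers the scale, 2.5
```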

19 of 21

Simulations investigating estimability included in paper

Hallway Simulation:
Measurement errors: 2 cm std for CDGPS, 1 pixel std for vision
Keyframes every 0.25 m; 242 keyframes; 1310 point features

Three scenarios:
1. GPS available
2. GPS lost when hallway entered
3. GPS reacquired when hallway exited

Simulation Scenario for BA

[Figure: simulation scenario with locations labeled A through D]

20 of 21

Simulation Results for BA

21 of 21

Hybrid batch/sequential estimator for loosely-coupled visual SLAM and CDGPS, with an IMU for state propagation; compared to the optimal estimator

Outlined algorithm for BA (batch)

Presented a novel technique for initialization of BA

BA simulations:
Demonstrated cm-level positioning accuracy and correspondingly accurate attitude in areas of GPS availability
Attained slow drift during GPS unavailability (0.4% drift over 50 m)

Summary

22 of 21

State vector: camera pose, velocity, and accelerometer and gyro biases

Propagation step: standard EKF propagation using accelerometer and gyro measurements

Accelerometer and gyro biases modeled as first-order Gauss-Markov processes

More information in paper …

Navigation Filter
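For reference, a first-order Gauss-Markov bias model in discrete time has the standard form below; the time constant τ and driving-noise intensity σ_w are generic symbols, not values from the paper.

```latex
% Continuous time: \dot{b}(t) = -\tfrac{1}{\tau}\, b(t) + w(t)
% Discretized over a propagation interval \Delta t:
b_{k+1} = e^{-\Delta t/\tau}\, b_k + w_k,
\qquad w_k \sim \mathcal{N}\!\left(0,\ \tfrac{\sigma_w^{2}\tau}{2}\left(1 - e^{-2\Delta t/\tau}\right)\right)
```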

23 of 21

Measurement update step: image feature measurements from all non-keyframes

Temporarily augment the state with point feature positions, using a prior from the map produced by BA
Cross-covariances must be ignored, which leads to filter inconsistency

Similar block-diagonal structure in the normal equations as in BA

Navigation Filter (cont.)
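A hedged sketch of what this temporary augmentation looks like (generic notation, not the paper's): the filter state x is stacked with the observed point features m, whose prior covariance P_mm comes from the BA map, while the cross-covariance block is set to zero; that omitted block is the source of the inconsistency noted above.

```latex
\begin{bmatrix} \mathbf{x} \\ \mathbf{m} \end{bmatrix},
\qquad
P_{\mathrm{aug}} =
\begin{bmatrix} P_{xx} & \mathbf{0} \\ \mathbf{0} & P_{mm} \end{bmatrix}
\qquad \text{(the true cross-covariance } P_{xm} \text{ is ignored)}
```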

24 of 21

Simulation Results for BA (cont.)
