
Recent research progress in vision-based navigation (ONR project), University of Illinois at Urbana-Champaign (PIs: S.-J. Chung and S. Hutchinson)


Vision based Navigation in Riverine Environments

Supported by Coastal Geosciences, Office of Naval Research

PI: Soon-Jo Chung

Co-PI: Seth Hutchinson

University of Illinois at Urbana-Champaign, Urbana, IL

January 25, 2013


Outline

Completed and On-going Work to Date

◦ Visual navigation in riverine environments using a hybrid observer

◦ Use of higher level structures in riverine environments

◦ New nonlinear estimator for stochastic systems

◦ Vision-based path planning for agile flight

◦ UAS platform and image-based tracking

Future Work

◦ Finish the path planning and image-tracking algorithms

◦ Experimentally validate the algorithms on the UAS platform

Publication / Presentation Plan

Concluding Remarks

References

Vision-based Navigation

• J. Yang, D. Rao, S.-J. Chung, and S. Hutchinson, “Monocular Vision based Navigation in GPS Denied Riverine Environments,” AIAA Infotech at Aerospace Conference, St. Louis, MO, Mar. 2011, AIAA-2011-1403.


Objective

Estimate a 3D point cloud map

◦ Use feature image points and their reflections on the river

◦ Overcome the drawback of the inverse-depth parameterization

Generate the trajectory of the UAS

Navigate a UAS inside a riverine environment

Location for navigation experiments (Crystal Lake, Urbana, IL)

River border marked in red; reflections shown on the river surface


Feature Extraction & Depth Perception

Monocular Vision based SLAM

◦ Steps for Navigation in Riverine Environments

• Measure MAV attitude by using epipolar geometry

• Extract coplanar features around the river surface

• Measure landmark range and bearing

• Navigate MAV with FastSLAM algorithm

Approach


Epipolar Geometry

◦ Projective geometry from different camera views

◦ Independent of scene structure

Fundamental Matrix

◦ Algebraic representation of epipolar geometry

◦ Maps a point in one image plane to a line in the other image plane

◦ Can be computed from correspondences of image points

Approach


Fundamental Matrix

Feature Correspondence

◦ SURF algorithm

• Fast and robust method to find correspondences between different views of a scene

◦ Compute the fundamental matrix from the matched points (see the sketch below)

Previous match and current match

Approach
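A minimal OpenCV sketch of this step. The slides use SURF, which now lives in opencv-contrib, so ORB stands in here as the detector/descriptor; the parameters and thresholds below are illustrative assumptions, not the project's code.

```python
import cv2
import numpy as np

def fundamental_from_pair(img1, img2):
    # Detect and describe features (ORB as a stand-in for the slides' SURF)
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(d1, d2, k=2)
    good = [m[0] for m in pairs
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # RANSAC rejects outliers while estimating F (point -> epipolar line map)
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inliers.ravel() == 1
    return F, pts1[keep], pts2[keep]
```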


Forward Translation

Epipolar Geometry

◦ The epipole can be found from its relationship with the fundamental matrix

◦ Focus of Expansion (FOE)

• The epipole is called the FOE under pure translational motion

Attitude Initialization

◦ Initialize the attitude during forward motion from the FOE (see the sketch below)

Approach
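For illustration, the epipoles are the null vectors of F (F e1 = 0 and Fᵀ e2 = 0), recoverable from the SVD; under pure forward translation the epipole is the FOE used for attitude initialization. A small numpy sketch, assuming F has already been estimated:

```python
import numpy as np

def epipoles(F):
    U, S, Vt = np.linalg.svd(F)
    e1 = Vt[-1]        # right null vector: F e1 = 0 (epipole in image 1)
    e2 = U[:, -1]      # left null vector:  F^T e2 = 0 (epipole in image 2)
    # Normalize homogeneous coordinates; under pure translation this is the FOE
    return e1 / e1[2], e2 / e2[2]
```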


Essential Matrix

◦ Specialization of the fundamental matrix

◦ Relates image points in normalized coordinates

◦ Relation with the fundamental matrix: E = K'^T F K

Singular Value Decomposition

◦ Factor the essential matrix into a skew-symmetric matrix and a rotation matrix

◦ Derived from the SVD of the essential matrix (see the sketch below)

◦ Twisted pair: a rotation through 180 degrees about the line that connects the camera centers

Approach
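A hedged numpy sketch of the standard SVD factorization (the textbook recipe of Hartley and Zisserman); the transcript does not show the project's implementation, so this is only the generic construction:

```python
import numpy as np

def decompose_essential(E):
    """Factor E = [t]_x R via SVD; returns the four candidate (R, t) poses."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U      # enforce proper rotations
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    # Twisted-pair rotations: R1 and R2 differ by a 180-degree rotation
    # about the baseline joining the camera centers
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                          # translation up to sign and scale
    # Disambiguate by checking which pose puts points in front of both cameras
    return (R1, t), (R1, -t), (R2, t), (R2, -t)
```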


Landmark Ranging

Landmark Extraction

◦ River surface generally has a consistent altitude

◦ Coplanar features surround the river surface

Range and Bearing Measurement

◦ MAV attitude is measured with epipolar geometry

◦ Landmark ranging through coordinate transformation

Approach


River Segmentation

Morphological Segmentation

◦ Locate dominant edges and relatively uniform surfaces

◦ Objective is to find a segmentation line to extract the river

◦ Compute the gradient norm of a gray-scale intensity image

• Form ranges and basins from the image

Range and Catchment Basin

◦ Range - High ridges corresponding to edges in an image

◦ Basin - Uniform regions of low points with less texture

(a) Intensity image (b) Gradient norm

Approach


Segmentation Algorithm

Image Immersion

◦ Marker points are specified at the top and bottom of the image to select the river

◦ Basins are “flooded” starting from the marker points

◦ Regions that merge across a marker belong together

Segmentation

◦ Segment the image into the corresponding marked regions (see the sketch below)

◦ Marked regions own the ranges in the gradient image if they are connected with the segment

(Figure: the immersion process illustrated as intensity-versus-distance profiles, flooded in successive stages from the markers)

Approach
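An illustrative marker-controlled watershed in the spirit of these slides, using OpenCV. The marker rows, the Sobel gradient, and the choice of which marker labels the river are assumptions of the sketch, not the project's exact settings.

```python
import cv2
import numpy as np

def segment_river(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.magnitude(gx, gy)         # ranges = edges, basins = flat regions

    h, w = gray.shape
    markers = np.zeros((h, w), np.int32)
    markers[5, :] = 1                    # marker row near the top: surroundings
    markers[h - 5, :] = 2                # marker row near the bottom: river

    # cv2.watershed "floods" the basins of the (3-channel) gradient image
    # outward from the markers; merged regions inherit the marker labels
    grad3 = cv2.cvtColor(cv2.convertScaleAbs(grad), cv2.COLOR_GRAY2BGR)
    cv2.watershed(grad3, markers)
    return markers == 2                  # boolean mask of the river region
```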


Landmark Extraction

River Segmentation

◦ Image of the river is segmented into two regions

◦ River surface and its surroundings

Feature Detection

◦ Compute eigenvalues of second-order derivative images

◦ Search for points with strong texture (see the sketch below)

Landmark Extraction

◦ Include features on the river segment as landmarks

◦ The landmarks are the map features for FastSLAM

Approach
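A sketch of this step under the assumption that the slide's eigenvalue test behaves like the Shi-Tomasi minimum-eigenvalue criterion, which cv2.goodFeaturesToTrack implements; the watershed mask restricts detection to the river segment. Parameters are illustrative.

```python
import cv2
import numpy as np

def river_landmarks(gray, river_mask, max_corners=200):
    # Only pixels inside the river segment are eligible as landmarks
    mask = river_mask.astype(np.uint8) * 255
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=10, mask=mask)       # min-eigenvalue "strong texture" test
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2)        # (N, 2) pixel coordinates
```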


Depth Perception

Image Frame to Camera Frame

◦ Consider the planarity of feature locations

◦ Enable immediate landmark initialization

◦ The relationship between the pixel coordinate frame and the camera frame can be derived

Approach


Depth Perception (cont’d)

Camera Frame to Primary Frame

◦ The primary frame is determined from the heading direction during attitude initialization

◦ The relationship between the camera frame and the primary frame can be derived from the attitude measurement

Approach


Depth Perception (cont’d)

Longitudinal and Transversal Distance

◦ Distance can be derived with the additional altitude information (see the sketch below)

Approach
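A compact sketch of the planar-depth idea from the last three slides: because landmarks lie near the river plane, a pixel ray expressed in the level (primary) frame can be scaled by the measured altitude to recover range immediately, avoiding delayed inverse-depth initialization. The intrinsic matrix K, the rotation, and the z-down convention are assumptions of the sketch, not the paper's exact frames.

```python
import numpy as np

def landmark_from_pixel(px, K, R_cam_to_primary, altitude):
    # Back-project the pixel to a ray in the camera frame
    ray_cam = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    # Express the ray in the level (primary) frame via the measured attitude
    ray = R_cam_to_primary @ ray_cam
    # Scale so the ray meets the river plane `altitude` below the camera
    # (z-down convention; assumes the ray points below the horizon, ray[2] > 0)
    s = altitude / ray[2]
    p = s * ray
    return p[0], p[1]    # longitudinal and transversal distance to landmark
```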


Experiments

Experiment Environment

◦ Boneyard Creek

• Experiments were conducted in the creek at the University of Illinois at Urbana-Champaign

◦ River-Like Environment

• There are coplanar features around the water surface of the creek

• Able to demonstrate our proposed method in an environment with no orthogonal structure


Results

SLAM Results

◦ Initial camera attitude was determined from the FOE

◦ Rotation in later frames was determined relative to this orientation


Discussions

Mapping Landmarks

◦ Landmarks are extracted around the water surface and are shown in the map

◦ The produced map illustrates the outline of the creek

◦ Some points come from reflections on the water surface as well as from the surroundings


Discussions (cont’d)

Vehicle Motion Model

◦ The projection of the MAV onto the ground is used

Feature Measurement Model

◦ Linearization error can be prevented by directly using measurements in Cartesian coordinates


Results

SLAM at the UIUC Engineering Quad

◦ Utilize the structural commonalities of diverse environments

◦ Estimate the path of an MAV


Results (cont’d)

SLAM at the UIUC Engineering Quad

◦ Compensate angular drift with FOE

◦ Track map features for consistent measurement


Results (cont’d)

Overlay of Mapping Results with MAV Trajectory

Attitude measurement from vision

UIUC Engineering Quad


Indoor Results

SLAM at the UIUC Beckman Institute

◦ The algorithm works in a diverse range of environments that have a path with a planar surface


Indoor Results (cont’d)

Results from Indoor Environments

Beckman Institute

Attitude measurement from vision


Conclusions

Developed an experimental platform using a monocular camera and an altimeter to compute the map in indoor and outdoor environments

Demonstrated the results of map construction and vehicle localization using the helicopter hardware platform

Vision-based Navigation Method based on Hybrid Observer

• J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, “Vision-Based Navigation of UAS in Riverine Environment,” AIAA Guidance, Navigation and Control, to be submitted.

• J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, “Vision-Based Navigation of UAS in Riverine Environment,” International Journal of Robotics Research, in preparation.


Method based on Hybrid Observer

Minimize the number of sensors to save power and payload

◦ IMU and an altimeter are already available aboard the UAS for flight control

◦ Lightweight monocular camera

◦ Design an estimator instead of using range sensors

(Figure: the MAV over the river, with observations of the riverbank, its reflection, and its altitude)

Approach


Closed-Loop Framework

Localization and Mapping with the proposed methods

Navigation of a UAS along the river based on the estimated results

Approach


Pseudo-Measurements

Exploit reflections of landmarks on the river surface (see the sketch below)

(Figure legend: sequence of camera locations; virtual camera; feature point and its reflection; virtual feature; observation of a virtual point; observation from a virtual camera; reflection transformation; vector normal to the river surface)

Approach
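A small numpy sketch of the reflection idea: mirroring the camera about the river plane (unit normal n, offset d, with n·x + d = 0 on the surface) turns an observed reflection into an ordinary observation of the real feature from a "virtual camera". The Householder form and variable names here are ours, not the paper's notation.

```python
import numpy as np

def householder(n):
    # Reflection matrix about a plane with unit normal n through the origin
    return np.eye(3) - 2.0 * np.outer(n, n)

def virtual_camera(R, c, n, d):
    """Mirror a camera with world-to-camera rotation R and center c
    about the plane n.x + d = 0."""
    M = householder(n)
    c_virtual = M @ c - 2.0 * d * n   # mirrored camera center
    R_virtual = R @ M                 # mirrored orientation; det = -1, the
                                      # handedness flip must be absorbed by
                                      # the measurement (image mirror) model
    return R_virtual, c_virtual
```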


Novel Hybrid Estimator Design

Hybrid observer for mapping

UAS trajectory estimation

Approach


Simulation Results

Localization and mapping results

(Figure legend: true UAS trajectory; estimated trajectory; true landmark locations; virtual point locations; estimated landmarks)

Results


Conclusions

Can navigate with fewer landmark structures (smaller state space)

Can operate in environments with few or non-unique point features

Can produce more structured maps of the environment, especially when point features aren’t meaningful landmarks

◦ Could provide useful cues for planning / control

More experimentation in different environments is needed

Higher Level Structures for SLAM

• D. Rao, S.-J. Chung, and S. Hutchinson, “CurveSLAM: An Approach for Vision-based Navigation without Point Features,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, October 7-12, 2012

• D. Rao, S.-J. Chung, and S. Hutchinson, “CurveSLAM: An Approach for Vision-based Navigation without Point Features,” International Journal of Robotics Research, to be submitted


Objective

Develop a novel curve-based algorithm to perform SLAM utilizing only the path edge curves from stereo data.

Benefits of curve-based SLAM:

◦ Can represent more structure in the environment

◦ Much smaller state space and uncluttered map

◦ More useful semantic information for planning and control

Motivation

Can we perform SLAM in these environments purely by exploiting the path / river edge structure?


Overview

Approach

Observing a planar world curve in two different images, we can determine the curve parameters and the plane orientation.

Stereo matching of points can be eliminated; instead, a model fit finds the curve parameters that minimize the reprojection error.


Curve Parametrization

We utilize planar cubic Bezier curves, defined by 4 control points, with t in [0, 1] (see the sketch below)

An affine transformation of the curve is the same as transforming the control points

◦ The projected curve in the image is approximately equivalent to the projection of the control points.

Each control point can be projected to the image using the stereo projection equations.

Approach
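A minimal sketch of the parametrization: evaluating a planar cubic Bezier curve from its 4 control points via the Bernstein basis. Because the curve is an affine combination of its control points, transforming the control points transforms the whole curve, which is what lets the control points be projected directly.

```python
import numpy as np

def bezier(P, t):
    """Evaluate a cubic Bezier curve.
    P: (4, d) control points; t: scalar or array of parameters in [0, 1]."""
    t = np.atleast_1d(t)[:, None]
    # Bernstein basis functions of degree 3
    b = np.hstack([(1 - t) ** 3,
                   3 * t * (1 - t) ** 2,
                   3 * t ** 2 * (1 - t),
                   t ** 3])
    return b @ P    # (len(t), d) points on the curve
```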


SLAM

State / process model with process noise terms.

Observations of out-of-plane pose and curve control points.

EKF-based SLAM

Curve correspondence? Need to find t values and split curves

Approach


Data Association

Curve splitting

◦ Using De Casteljau’s algorithm, the control points of a split curve are a linear transformation of the original (see the sketch below)

Curve correspondence

◦ Track the end points of map curves in the images

Approach
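A sketch of De Casteljau subdivision at parameter t. Both halves' control points are linear in the original four, which is what keeps curve splitting compatible with a linear filter update.

```python
import numpy as np

def split_bezier(P, t):
    """Split a cubic Bezier with control points P (4 x d) at parameter t;
    returns the control points of the left and right halves."""
    P01 = (1 - t) * P[0] + t * P[1]
    P12 = (1 - t) * P[1] + t * P[2]
    P23 = (1 - t) * P[2] + t * P[3]
    P012 = (1 - t) * P01 + t * P12
    P123 = (1 - t) * P12 + t * P23
    Pm = (1 - t) * P012 + t * P123        # the point on the curve at t
    left = np.array([P[0], P01, P012, Pm])
    right = np.array([Pm, P123, P23, P[3]])
    return left, right
```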


Vision Results

Results

Stereo vision data on various paths of length up to 100 m.

SLAM estimate based purely on path edge curves.

The algorithm can also recover from a series of poor curve measurements (below).


Simulation (Consistency) Results

Results

Simulated two loops of the three environments shown (total lengths of 160 m, 250 m, and 400 m).

Normalized Estimation Error Squared (NEES) is used as a measure of filter consistency (95% confidence interval).


Simulation (Consistency) Results

Results

NEES plots are over 50 Monte Carlo runs; the 95% CI is shown in red (computation sketched below)

Improvement in consistency over previous work.
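For reference, a sketch of how the averaged NEES and its chi-square consistency bounds are typically computed over Monte Carlo runs (Bar-Shalom style); the array shapes are assumptions of the sketch.

```python
import numpy as np
from scipy.stats import chi2

def average_nees(errors, covariances):
    """errors: (N_runs, T, n) state-estimate errors;
    covariances: (N_runs, T, n, n) filter covariances."""
    N, T, n = errors.shape
    # NEES per run and time step: e^T P^{-1} e
    nees = np.einsum('rti,rtij,rtj->rt', errors,
                     np.linalg.inv(covariances), errors)
    avg = nees.mean(axis=0)                 # averaged over the N runs
    # The sum of N chi-square(n) variables is chi-square(N*n), so the 95%
    # bounds on the run-averaged NEES are:
    lo = chi2.ppf(0.025, N * n) / N
    hi = chi2.ppf(0.975, N * n) / N
    return avg, lo, hi
```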


Conclusions

Can navigate with fewer landmark structures (smaller state space)

Can operate in environments with few or non-unique point features

Can produce more structured maps of the environment, especially when point features aren’t meaningful landmarks

◦ Could provide useful cues for planning / control

More experimentation in different environments is needed

Nonlinear Estimator

• A. P. Dani, S.-J. Chung, and S. Hutchinson, “Observer Design for Stochastic Nonlinear Systems via Contraction-based Incremental Stability,” IEEE Transactions on Automatic Control, under review, 2012

• A. P. Dani, S.-J. Chung, and S. Hutchinson, “Observer Design for Stochastic Nonlinear Systems using Contraction Analysis,” Proc. IEEE Conference on Decision and Control (CDC), Maui, HI, December 2012


Observer/Estimator

How to find P: solve the matrix inequality using tools such as cvx (see the sketch below)

Approach
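The slide's exact matrix inequality is not reproduced in this transcript, so as an illustration the sketch below solves a generic Lyapunov-type LMI for P > 0 with CVXPY, the Python sibling of the cvx toolbox mentioned above; A and Q are placeholder data, not the paper's system.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # placeholder system matrix
Q = np.eye(2)
eps = 1e-6

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> eps * np.eye(2),          # P positive definite
               A.T @ P + P @ A << -Q]         # Lyapunov-type LMI
cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()
print(P.value)    # a feasible P certifying the inequality
```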


Flow Chart

(Flow chart: the sensor measurement feeds SDRE Filters 1 through n, each running propagate and update in parallel)

Approach


Simulation (Consistency) Results

Result


Simulation Results Comparison


Result

Motion Planning

• A. Paranjape, S.-J. Chung, and S. Hutchinson, “Optimum Spatially Constrained Turns for MAVs,” AIAA Guidance, Navigation and Control, in preparation

• A. Paranjape, S.-J. Chung, and S. Hutchinson, “Optimum Spatially Constrained Turns for MAVs,” International Journal of Robotics Research, in preparation


Problem Statement

Forest: a dense, unstructured, unknown obstacle field

Challenge: agile flight needs quick visual sensing and estimation

Challenge: the aircraft has a minimum flight speed and a minimum turn radius

◦ Restricts the set of possible paths

◦ May occasionally require “turn around”

A dyadic motion planning algorithm

◦ Motion planner for high-speed forward flight; if unable to find a solution, then

◦ Aggressive, spatially constrained 180-degree turns

Problem


Spatially Constrained Turns

Problem statement: change the heading by 180 degrees inside the minimum possible volume

Equations of motion are written in the χ (heading angle) domain instead of time

◦ Converts the problem into a fixed boundary problem

◦ A more intuitive feedback law can be designed

Approach


Features of the Turn

Should not rely on rapidly changing control inputs

◦ Sensitivity to delays

◦ Potential for instability

Preferably be a motion primitive

◦ Primitive can be mapped to the spatial constraint

◦ Again, can yield an intuitive feedback

Control inputs for guidance

◦ Angle of attack (α): flight path angle and speed

◦ Wind axis roll angle (μ): heading change

◦ Thrust (T): speed and flight path angle

Approach


Designing the Turn

Intuition

◦ Large angle of attack permits rapid turn rate and rate of

pull-up, depending on the roll angle; however, a large

alpha can cause rapid deceleration

◦ Large wind axis roll angle permits a large turn rate

◦ Large thrust helps climb rapidly or prevents loss in altitude,

and aids speed recovery

Use a climbing/descending turn

◦ Permits rapid heading change without the need for a very

large lift (and hence drag)

Problem statement: choose constant values for the

control inputs (T, α, μ) which yield the least value of

the cost function

Choose weights to match the spatial constraints

J = q_x x^2 + q_y y^2 + q_z z^2

Approach
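A toy sketch of this optimization: simulate a point-mass 180-degree turn under constant (T, α, μ), record the spatial extents, and minimize J = q_x x^2 + q_y y^2 + q_z z^2. The point-mass model and every numerical parameter below are illustrative assumptions, not the paper's aircraft model.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative point-mass parameters (assumed, not from the paper)
g, m, rho, S, CLa, CD0, k = 9.81, 1.0, 1.225, 0.3, 5.0, 0.02, 0.05

def turn_extents(T, alpha, mu, V0=12.0, dt=0.005, t_max=30.0):
    """Integrate a point-mass turn until the heading changes by 180 degrees;
    return the max |x|, |y|, |z| excursions, or None if the turn fails."""
    V, gam, chi, t = V0, 0.0, 0.0, 0.0
    x, ext = np.zeros(3), np.zeros(3)
    while abs(chi) < np.pi:
        L = 0.5 * rho * V**2 * S * CLa * alpha                    # lift
        D = 0.5 * rho * V**2 * S * (CD0 + k * (CLa * alpha)**2)   # drag
        V += dt * ((T - D) / m - g * np.sin(gam))
        gam += dt * (L * np.cos(mu) / (m * V) - g * np.cos(gam) / V)
        chi += dt * L * np.sin(mu) / (m * V * np.cos(gam))
        x += dt * V * np.array([np.cos(gam) * np.cos(chi),
                                np.cos(gam) * np.sin(chi),
                                np.sin(gam)])
        ext = np.maximum(ext, np.abs(x))
        t += dt
        if V < 2.0 or t > t_max:
            return None        # stalled or never completed the turn
    return ext

def J(u, q):
    ext = turn_extents(*u)
    return 1e9 if ext is None else float(q @ ext**2)

q = np.array([1.0, 5.0, 1.0])  # weights (q_x, q_y, q_z): tight constraint on y
res = minimize(J, x0=[5.0, 0.4, 1.0], args=(q,), method='Nelder-Mead')
print(res.x)                   # constant (T, alpha, mu) for the turn
```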


Simulation Results

- The optimum angle of attack is almost constant and very close to the maximum chosen limit of 0.6 radians

- The bank angle is large when the constraint on z is tight, as expected

- Thrust is large when the constraint on y is tight, but below the maximum achievable limit (which can cause a large climb)

- Constraint on x: multiple solutions

Result


Simulation Results

Multiple solutions for weight combination (5, 1, 1)

Spatially constrained turns; Left: (1, 1, 5), Right: (1, 5, 1)

Result


Conclusions

Developed a motion planning algorithm for agile motion of an unmanned aerial system in forest-type environments

Demonstrated preliminary results in simulations

Work is in progress to improve the algorithm and perform experimental validation

Image-based Tracking

• A. P. Dani, S.-J. Chung, and S. Hutchinson, “Moving Object Feature Tracking for an Airborne Moving Camera using IMU and Altitude Sensor,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), in preparation

• A. P. Dani, S.-J. Chung, and S. Hutchinson, “Moving Object Feature Tracking for an Airborne Moving Camera using IMU and Altitude Sensor,” International Journal of Robotics Research, in preparation


Overview

1. M. Hwangbo, J.-S. Kim, and T. Kanade, “Gyro-aided feature tracking for a moving camera: fusion, auto-calibration and GPU implementation,” Int. J. Robot. Res., vol. 30, no. 14, pp. 1755–1774, 2011.

2. Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-Learning-Detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 7, pp. 1409–1422, 2012.


Vision-based Tracking

Outline of the preliminary version of the algorithm


Conclusions

Developing a vision-based robust feature/object tracking algorithm for tracking features in images that involve rapid rotation and translation due to agile motion of the UAS

Incorporating a feature position predictor to improve feature tracking (see the sketch below)

Future work involves finalizing the algorithm and testing it on a hardware platform
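A sketch of one such predictor in the spirit of gyro-aided tracking (Hwangbo et al., referenced above): a frame-to-frame camera rotation integrated from IMU rates maps pixels through the infinite homography H = K R K^-1, seeding the tracker's search window before the image search. K and R are assumed known; this is an illustration, not the project's algorithm.

```python
import numpy as np

def predict_features(pts, K, R_delta):
    """pts: (N, 2) pixel positions in the previous frame;
    R_delta: camera rotation over the frame interval (from the gyro)."""
    # Infinite homography: how pixels move under a pure camera rotation
    H = K @ R_delta @ np.linalg.inv(K)
    p = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous pixels
    q = (H @ p.T).T
    return q[:, :2] / q[:, 2:3]                    # predicted pixel positions
```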

Unmanned Aerial System Platforms


UAS Platform 1

X-8 UAS Platform Specifications


UAS Platform 1

(Figure: platform with labeled FPV camera (CCD RGB sensor), Ardupilot, and RC receiver)


UAS Platform 2

On-board Capabilities:

- Autopilot (Ardupilot)

- Inertial Measurement Unit

- 3-axis Magnetometer

- Ultrasound altimeter sensor

- FPV CCD Camera

- GPS (for measuring ground truth data)


Flight Result

Autonomous Flight Mission


Conclusions

Developing a fixed-wing unmanned aerial platform with on-board autopilot, IMU, GPS sensors, ground control station, and communication channel

Developed an unmanned helicopter with on-board autopilot, IMU, GPS, and magnetometer sensors

Future Work


Planned Work

1. Motion Planning

Formal optimization of the equations of motion for spatially constrained turns

◦ Can use results from the previous slides as a sanity check

◦ Intuition: expect to obtain closely matching solutions

Motion planning for forward flight

Inverse design: designing an aircraft based on criteria derived from motion planning

2. Feature Tracking

Use novel features, or combinations of them, for opportunistic and robust feature tracking


Experimental Validation

• A 400-size UAS helicopter and a fixed-wing UAS

• Autopilot system: ArduPilot, fully integrated with 3-axis gyros/accelerometers, GPS, and an ultrasonic altimeter

• Image processing & real-time navigation: 1 GHz x86 CPU with SIMD instructions, 1 GB DDR2 533 MHz RAM, 4 GB SSD, Linux kernel


Publication / Presentation Plan

J. Yang, D. Rao, S.-J. Chung, and S. Hutchinson, “Monocular Vision based Navigation in GPS Denied Riverine Environments,” AIAA Infotech at Aerospace Conference, St. Louis, MO, Mar. 2011, AIAA-2011-1403.

D. Rao, S.-J. Chung, and S. Hutchinson, “CurveSLAM: An Approach for Vision-based Navigation without Point Features,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, October 7-12, 2012.

A. P. Dani, S.-J. Chung, and S. Hutchinson, “Observer Design for Stochastic Nonlinear Systems using Contraction Analysis,” Proc. IEEE Conference on Decision and Control (CDC), Maui, HI, December 2012, to appear.

A. P. Dani, S.-J. Chung, and S. Hutchinson, “Observer Design for Stochastic Nonlinear Systems via Contraction-based Incremental Stability,” IEEE Transactions on Automatic Control, under review, 2012.

J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, “Vision-Based Navigation of UAS in Riverine Environment,” AIAA Guidance, Navigation and Control, to be submitted.


Publication / Presentation Plan

J. Yang, A. Dani, S.-J. Chung, and S. Hutchinson, “Vision-Based Navigation of UAS in Riverine Environment,” International Journal of Robotics Research, in preparation.

D. Rao, S.-J. Chung, and S. Hutchinson, “CurveSLAM: An Approach for Vision-based Navigation without Point Features,” International Journal of Robotics Research, in preparation.

A. Paranjape, S.-J. Chung, and S. Hutchinson, “Optimum Spatially Constrained Turns for MAVs,” AIAA Guidance, Navigation and Control, in preparation.

A. Paranjape, S.-J. Chung, and S. Hutchinson, “Optimum Spatially Constrained Turns for MAVs,” International Journal of Robotics Research, in preparation.

A. P. Dani, S.-J. Chung, and S. Hutchinson, “Moving Object Feature Tracking for an Airborne Moving Camera using IMU and Altitude Sensor,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), in preparation.

A. P. Dani, S.-J. Chung, and S. Hutchinson, “Moving Object Feature Tracking for an Airborne Moving Camera using IMU and Altitude Sensor,” International Journal of Robotics Research, in preparation.


Concluding Remarks

Developed a novel Curve-based SLAM method

Developed a new state estimation method

Developing a hybrid observer-based navigation algorithm for UAS in riverine environments

Developing a motion planning algorithm for agile flight in the riverine environment

Developing a vision-based feature tracking algorithm

Our results will play a key role in enhancing the Navy's intelligence, surveillance, and reconnaissance missions in GPS-denied riverine environments.


References

Simultaneous Localization and Mapping

◦ J. Sola, T. Vidal-Calleja, J. Civera, and J. M. M. Montiel, “Impact of landmark parametrization on monocular EKF-SLAM with points and lines,” International Journal of Computer Vision, vol. 97, no. 3, pp. 339–368, 2012.

◦ J. Civera, A. Davison, and J. Montiel, “Inverse depth parametrization for monocular SLAM,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 932–945, 2008.

◦ M. Parsley and S. Julier, “Avoiding negative depth in inverse depth bearing-only SLAM,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sept. 2008, pp. 2066–2071.

◦ S. Tully, H. Moon, G. Kantor, and H. Choset, “Iterated filters for bearing-only SLAM,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), May 2008, pp. 1442–1448.

Structure and Motion Estimation

◦ A. Dani, N. Fischer, and W. Dixon, “Single camera structure and motion,” IEEE Transactions on Automatic Control, vol. 57, no. 1, pp. 238–243, Jan. 2012.

◦ A. Heyden and O. Dahl, “Provably convergent structure and motion estimation for perspective systems,” in Proc. 48th IEEE Conference on Decision and Control (CDC), held jointly with the 28th Chinese Control Conference, Dec. 2009, pp. 4216–4221.


References (cont’d)

Riverine Navigation

◦ S. Rathinam, P. Almeida, Z. Kim, S. Jackson, A. Tinka, W. Grossman, and R. Sengupta, “Autonomous searching and tracking of a river using a UAV,” in Proc. American Control Conference (ACC), July 2007, pp. 359–364.

◦ J. C. Leedekerken, M. F. Fallon, and J. J. Leonard, “Mapping complex marine environments with autonomous surface craft,” in Intl. Sym. on Experimental Robotics (ISER), Delhi, India, Dec. 2010.

◦ J. Yang, D. Rao, S.-J. Chung, and S. Hutchinson, “Monocular vision based navigation in GPS denied riverine environments,” in AIAA Infotech at Aerospace Conference, AIAA-2011-1403, St. Louis, MO, 2011.

◦ S. Scherer, J. Rehder, S. Achar, H. Cover, A. Chambers, S. Nuske, and S. Singh, “River mapping from a flying robot: state estimation, river detection, and obstacle mapping,” Auton. Robots, vol. 33, no. 1-2, pp. 189–214, Aug. 2012.

Contraction Theory

◦ W. Lohmiller and J. J. E. Slotine, “On contraction analysis for non-linear systems,” Automatica, vol. 34, no. 6, pp. 683–696, 1998.

◦ W. Lohmiller and J. J. E. Slotine, “Nonlinear process control using contraction theory,” AIChE J., vol. 46, pp. 588–596, 2000.


References (cont’d)

Curve-Based SLAM

◦ M. H. An and C. N. Lee, “Stereo Vision Based on Algebraic Curves,” in IEEE International Conference on Pattern Recognition (ICPR), 1996.

◦ L. Piegl and W. Tiller, The NURBS Book, Springer-Verlag, New York, NY, USA, 1997.

◦ L. Pedraza, G. Dissanayake, J. V. Miro, D. Rodriguez-Losada, and F. Matia, “BS-SLAM: Shaping the World,” Proc. of Robotics: Science and Systems, 2007.

◦ A. Huang and S. Teller, “Probabilistic Lane Estimation using Basis Curves,” Proc. of Robotics: Science and Systems, 2010.

◦ J. Y. Kaminski and A. Shashua, “Multiple View Geometry of General Algebraic Curves,” International Journal of Computer Vision (IJCV), 56(3), 2004.

◦ J. Y. Kaminski and A. Shashua, “On Calibration and Reconstruction from Planar Curves,” Lecture Notes in Computer Science, 1842/2000, pp. 678–694, 2000.

◦ B. F. Buxton and H. Buxton, “Computation of Optic Flow from the Motion of Edge Features in Image Sequences,” Image and Vision Computing, 2(2), 1984.

◦ J.-P. Gambotto, “A New Approach to Combining Region Growing and Edge Detection,” Pattern Recognition Letters, 14(11), pp. 869–875, 1993.

Image-based Tracking

◦ M. Hwangbo, J.-S. Kim, and T. Kanade, “Gyro-aided feature tracking for a moving camera: fusion, auto-calibration and GPU implementation,” Int. J. Robot. Res., vol. 30, no. 14, pp. 1755–1774, 2011.

◦ Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-Learning-Detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 7, pp. 1409–1422, 2012.
