
AUV/ROV POSE AND SHAPE ESTIMATION OF TETHERED TARGETS WITHOUT FIDUCIALS

Sean Augenstein 1
[email protected]

Stephen Rock 1,2
[email protected]

1 Dept. of Aeronautics and Astronautics
Stanford University
496 Lomita Mall
Stanford, CA 94305

2 Monterey Bay Aquarium Research Institute
7700 Sandholdt Road
Moss Landing, CA 95039

Abstract

This paper presents an algorithm (SPEAR) for visually estimating the pose and shape of an underwater target without fiducials. The goal is to enable AUV/ROV stationkeeping with respect to targets about which nothing is known a priori. It will also enable relative positioning with respect to any face on the 3-D body. It uses a particle filter to simultaneously track the pose and reconstruct a 3-D point cloud model of the target, based on concepts from the SLAM community (in particular, the FastSLAM algorithm [1], [2]).

The algorithm for target estimation is explained and applied to the case of monocular sensing. Results are given showing successful estimation of an underwater target's pose along with the 3-D locations of features on the body, based on experiments conducted in Monterey Bay, California with the ROV Ventana and a tethered underwater target.

1 Introduction

This paper presents a novel method for visual estimation of an unknown underwater object (the 'target'), relative to an AUV/ROV. The algorithm continually estimates relative pose to handle rotating and translating targets (an issue since tethered targets can move unpredictably due to ocean currents). In addition, the algorithm forms and updates an estimate of the 3-D shape of the body (i.e., 'reconstruction'), so tracking/proximity operations are not constrained by a single view of the target.

This research is motivated by the need for AUV/ROV stationkeeping on objects with no fiducials, or in situations where existing markers have been obscured (for example, by biogrowth). Beyond fiducial-less stationkeeping, a higher-level goal is the ability to command a vehicle to hold station with respect to any particular face on a 3-D target, and slew between distinct faces, as opposed to being limited to one surface. For this reason, online 3-D reconstruction takes place concurrently with relative pose tracking.

The algorithm presented in this paper is inspired by algorithms from the Simultaneous Localization and Mapping (SLAM) field. For this reason, the algorithm is referred to as SLAM-inspired Pose Estimation and Reconstruction ('SPEAR') [3]. The SPEAR framework is broad and of use with any vision sensing configuration (i.e., single or multiple cameras). It can be used with either stereo or monocular vision systems. Stereo provides depth data, but involves more complicated image correspondence (matching features between the camera pair as well as matching features over time). Monocular vision is simpler, but estimation techniques must overcome the lack of range data in a single measurement. The choice was made in this research to focus on monocular sensing. Figure 1 illustrates the sensing of a tethered target.

In place of fiducials, the work presented here uses natural image features. While the algorithm is independent of the particular choice of image point features used, SIFT [4] was used due to its invariance properties (which allow for feature tracking/matching through a variety of transformations). SIFT has also been applied previously to robotic applications in the underwater environment, including SLAM [5] and AUV navigation [6].


Figure 1: AUV/ROV Imaging a U/W Target


2 Background

Much previous vision-based AUV/ROV relative positioning research has relied on known fiducials on the object [7, 8]. This necessarily constrains stationkeeping to specific targets. Recently an alternative algorithm has been presented [9] in which a user-selected group of natural SIFT features acts as the reference marker. It is intended for ROV teleoperation about stationary targets. These methods use a single face of the target as the reference, and that face must remain in view of the camera to determine relative pose.

For broader capability, i.e., the ability to hold station with respect to any face, or slew from one face to another, some information on the shape is necessary. This could come in the form of a CAD model; however, this constrains stationkeeping capability to objects for which a priori information is available. Alternatively, the 3-D shape of the target can be visually reconstructed. As the targets are rotating and translating on tethers in the midwater, this reconstruction takes place concurrently with (and thus factors in) the relative pose estimation.

The problem of bearings-only reconstruction of an object up to a scale factor, along with the relative poses of the camera, is addressed in the field of Structure-from-Motion (SFM). However, much of the SFM research focuses on offline, batch processing, to find an optimal solution for the shape and poses based on all the images acquired [10]. While the solution obtained is accurate, an offline batch algorithm is of no use in a real-time estimation/control loop.

There are also SFM algorithms that process each new image recursively and online, and these might be applied in a real-time situation. The dominant approach to performing recursive SFM has been to use extended Kalman filters (EKFs) [11–13]. However, the selection of an EKF makes a constraining assumption on the probability distribution of the states of the filter, namely, that they are Gaussian-like (i.e., unimodal). In addition, many of these methods have various computational drawbacks, such as requiring inversion of large matrices and/or requiring a batch process for initialization.

More recently, research has been done on particle filter-based SFM, which relaxes the assumption of a Gaussian posterior. Previous particle filter methods either use fiducials for state initialization [14] and periodic state correction [15], or are motivated by issues of unknown feature correspondence between images [16] and have not run in real-time.

The SPEAR algorithm presented here is a SLAM-inspired, efficient particle-filter approach to solving the problem of real-time recursive reconstruction and pose estimation without fiducials or other a priori information. Specifically, the approach is based on the FastSLAM algorithm [1, 2, 17]. It uses a type of particle filter (known as a Rao-Blackwellized Particle Filter [18]) which efficiently partitions the state space, and appears not to have been used previously in the SFM field.

SPEAR takes advantage of the strong structural similarities between SLAM and the pose estimation and reconstruction task. SLAM refers to the task of determining own-vehicle location/pose along with a map of the vehicle's surroundings [19]. Where SLAM algorithms are concerned with processing measurements to form a map of the external world, SPEAR is concerned with processing measurements to form a 3-D 'map' of a distinct external target. Where SLAM algorithms are concerned with the kinematic states of the own-robot making the measurements, SPEAR is concerned with the kinematic states of the external target.

3 Problem Definition

SPEAR entails two concurrent estimation problems: pose estimation, to determine the kinematic states s (which evolve over time), and reconstruction, to determine the 3-D locations of the features in the body frame, X (constants, since a rigid body is assumed). These are discussed in turn. For notation, the discrete time steps are indexed by k, and the features are distinguished by index j.



3.1 Pose Estimation

Pose estimation refers to the filtering of the target's kinematic state s as it evolves over time. Explicitly, the kinematic state vector at time k is:

\[
s_k = \begin{bmatrix} \phi & \theta & \psi & \omega_{x_R} & \omega_{y_R} & \omega_{z_R} & x_t & y_t & z_t & \dot{x}_t & \dot{y}_t & \dot{z}_t \end{bmatrix}^T_k \qquad (1)
\]

The Euler angles $\phi$, $\theta$, and $\psi$ give the rotation to the body frame from the reference frame; $\omega_{x_R}$, $\omega_{y_R}$, and $\omega_{z_R}$ form the target's angular rate vector, expressed in the reference frame; $x_t$, $y_t$, $z_t$, $\dot{x}_t$, $\dot{y}_t$, $\dot{z}_t$ are the position and velocity of the target's center of rotation in the reference frame. The reference frame can be either the camera's frame (in which case the position is in non-dimensional coordinates), or an inertial frame if position/orientation sensors are available on the observing AUV/ROV.

The kinematic state's time evolution is modeled by the following kinematic motion update equation:

\[
s_k = f(s_{k-1}) + n_p
\]
\[
\begin{bmatrix} \phi \\ \theta \\ \psi \\ \omega_{x_R} \\ \omega_{y_R} \\ \omega_{z_R} \\ x_t \\ y_t \\ z_t \\ \dot{x}_t \\ \dot{y}_t \\ \dot{z}_t \end{bmatrix}_k
=
\begin{bmatrix}
\phi_{(k-1)} + \Delta t \, \dot{\phi}_{(k-1)} \\
\theta_{(k-1)} + \Delta t \, \dot{\theta}_{(k-1)} \\
\psi_{(k-1)} + \Delta t \, \dot{\psi}_{(k-1)} \\
\omega_{x_R(k-1)} \\
\omega_{y_R(k-1)} \\
\omega_{z_R(k-1)} \\
x_{t(k-1)} + \Delta t \, \dot{x}_{t(k-1)} \\
y_{t(k-1)} + \Delta t \, \dot{y}_{t(k-1)} \\
z_{t(k-1)} + \Delta t \, \dot{z}_{t(k-1)} \\
\dot{x}_{t(k-1)} \\
\dot{y}_{t(k-1)} \\
\dot{z}_{t(k-1)}
\end{bmatrix}
+ n_p
\qquad (2)
\]

where
\[
\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix}_k
=
\begin{bmatrix}
1 & \tan(\theta_k)\sin(\phi_k) & \tan(\theta_k)\cos(\phi_k) \\
0 & \cos(\phi_k) & -\sin(\phi_k) \\
0 & \dfrac{\sin(\phi_k)}{\cos(\theta_k)} & \dfrac{\cos(\phi_k)}{\cos(\theta_k)}
\end{bmatrix}
R_{B/R}
\begin{bmatrix} \omega_{x_R(k)} \\ \omega_{y_R(k)} \\ \omega_{z_R(k)} \end{bmatrix}
\qquad (3)
\]

In Eq. 2, $n_p$ is additive noise, with Gaussian distribution $\mathcal{N}(0, P_{[4:12,4:12]})$ on the rates, position, and velocity components of $s_k$. A Fisher distribution ($\mathcal{F}(\mu, \kappa)$), a probability model for directions in space [20], is used as the basis for applying noise on the Euler angle components of $s_k$. This ensures perturbations to the target's 3-D orientation are identically distributed over the entire range of Euler angle parameters (further discussed in [3]).

In Eq. 3, $R_{B/R}$ is the rotation matrix which rotates vectors from the reference frame R into the body frame B.
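
As an illustration of Eqs. 2-3, the following minimal Python/NumPy sketch (not the authors' implementation) propagates the kinematic state one step; the rotation matrix R_BR and the time step dt are assumed to be supplied by the caller.

```python
import numpy as np

def euler_rates(phi, theta, omega_ref, R_BR):
    """Eq. 3: map the reference-frame angular rate into Euler angle rates."""
    omega_body = R_BR @ omega_ref  # rotate the rate vector into the body frame
    E = np.array([
        [1.0, np.tan(theta) * np.sin(phi), np.tan(theta) * np.cos(phi)],
        [0.0, np.cos(phi), -np.sin(phi)],
        [0.0, np.sin(phi) / np.cos(theta), np.cos(phi) / np.cos(theta)],
    ])
    return E @ omega_body

def motion_update(s, R_BR, dt):
    """Eq. 2, noise-free part f(s): constant-rate propagation of the state of Eq. 1."""
    s_next = s.copy()
    eul_dot = euler_rates(s[0], s[1], s[3:6], R_BR)
    s_next[0:3] = s[0:3] + dt * eul_dot   # Euler angles integrate their rates
    s_next[6:9] = s[6:9] + dt * s[9:12]   # position integrates velocity
    # angular rate (s[3:6]) and velocity (s[9:12]) are held constant
    return s_next
```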

3.2 Reconstruction

SPEAR also reconstructs the target's 3-D shape; this is represented as a point cloud, with the points being the locations where the observed SIFT features sit on the target. A key assumption is that the target is a rigid body, so that the feature locations are fixed with respect to each other.

The matrix of feature locations on the body is:

\[
X = \begin{bmatrix} X_1 & X_2 & \cdots & X_j & X_{j+1} & \cdots \end{bmatrix}
= \begin{bmatrix}
x_1 & x_2 & \cdots & x_j & x_{j+1} & \cdots \\
y_1 & y_2 & \cdots & y_j & y_{j+1} & \cdots \\
z_1 & z_2 & \cdots & z_j & z_{j+1} & \cdots
\end{bmatrix}
\qquad (4)
\]

This matrix grows as new features are initialized.

As the object is observed by a calibrated camera, SIFT features are extracted from the images. These measurements (z) are in pixel coordinates (u, v). A pinhole model is assumed:

\[
z_j = g(s, X_j) + n_c
\]
\[
\begin{bmatrix} u_j \\ v_j \end{bmatrix}
= \frac{foc}{z^{C}_{j/cam}}
\begin{bmatrix} x \\ y \end{bmatrix}^{C}_{j/cam}
+ n_c
\qquad (5)
\]

In Eq. 5, $n_c$ is additive Gaussian noise with distribution $\mathcal{N}(0, r^2 I)$.

The relationship between the feature location w.r.t. the camera in the camera frame ($X^{C}_{j/cam}$) and the feature location w.r.t. the target in the target's body frame ($X_j = X^{B}_{j/target}$) is given by:

\[
\begin{aligned}
X^{C}_{j/cam} &= R_{C/R}\, X^{R}_{j/cam} \\
&= R_{C/R}\left( X^{R}_{j/target} + t_{target/cam} \right) \\
&= R_{C/R}\left( R_{R/B}\, X^{B}_{j/target} + t_{target/cam} \right)
\end{aligned}
\qquad (6)
\]

In Eq. 6, $R_{R/B}$ is the rotation matrix which rotates vectors from the body frame B into the reference frame R, and $R_{C/R}$ is the rotation matrix which rotates vectors from the reference frame R into the camera frame C (if these frames are not the same).
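
The measurement model of Eqs. 5-6 can be sketched as follows (an illustrative reimplementation, not the authors' code; the rotation matrices, the target/camera translation, and the focal length foc are assumed inputs):

```python
import numpy as np

def project_feature(Xj_body, R_RB, t_target_cam, R_CR, foc):
    """Predict the pixel measurement (u, v) of body-frame feature Xj (Eqs. 5-6)."""
    X_ref = R_RB @ Xj_body + t_target_cam   # feature w.r.t. the camera, reference frame (Eq. 6)
    X_cam = R_CR @ X_ref                    # rotate into the camera frame
    u = foc * X_cam[0] / X_cam[2]           # pinhole projection (Eq. 5)
    v = foc * X_cam[1] / X_cam[2]
    return np.array([u, v])
```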

4 Algorithm

To solve the estimation problem presented in the previous section, a particle filter is implemented. The particle filter consists of many particles (referred to by index i), each containing a hypothesis for the kinematic state at the latest time step k, $s^{[i]}_k$.


For a given particle, each feature's position can be estimated independently of the others [1, 21]. This algorithm duplicates the approach of FastSLAM, which has each particle maintain a set of small EKFs, one for each feature. Within each particle i, the distribution $p(X_j\,|\,s^{[i]})$ is assumed to be Gaussian, $\mathcal{N}(\mu^{[i]}_j, \Sigma^{[i]}_j)$.

Each particle contains N EKFs; that is, for particle i, every feature j has a mean $\mu^{[i]}_j$ and covariance $\Sigma^{[i]}_j$ to estimate its true 3-D location in the body frame ($X_j$).

At every time-step, the particle filter proceeds through three distinct updates: a measurement update, a weighting/resampling of the particles based on their likelihoods, and a motion update to predict the kinematic state at the next time step. These are described in turn.
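
A minimal sketch of this Rao-Blackwellized particle structure (illustrative only, not the authors' data structures) is:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FeatureEKF:
    mu: np.ndarray       # estimated 3-D feature location in the body frame
    Sigma: np.ndarray    # 3x3 covariance of that estimate

@dataclass
class Particle:
    s: np.ndarray                                 # 12-element kinematic state hypothesis (Eq. 1)
    features: dict = field(default_factory=dict)  # feature index j -> FeatureEKF
    weight: float = 1.0                           # importance weight w[i]
```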

4.1 Measurement Updates

The algorithm depends on consistently tracking features over several images. Feature correspondence is handled 'upstream' of the filter. Any type of point features can be used. SIFT features [4] were used for the experiments in this paper because they have a few beneficial properties. Each SIFT feature has a unique 128-vector descriptor which greatly aids correspondence. Also, SIFTs are somewhat robust to orientation (including out-of-plane) changes, making them useful for tracking on a rotating body.

During the SPEAR process, a library of previously observed SIFT features is maintained. When a new image is taken, each newly observed SIFT is compared against this library to find a potential match. Three steps are performed to find the correspondence: a pixel window is applied to narrow the search, then the standard SIFT correspondence algorithm is applied [4], and finally a threshold is applied on the dot product between the new SIFT and any potential match. See [3] for more information on the feature correspondence procedure. A simplified sketch of this three-step test is given below.
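
The window size, ratio, and dot-product threshold below are placeholders, not the values used in this work; the sketch is illustrative only.

```python
import numpy as np

def match_feature(new_pix, new_desc, lib_pix, lib_desc,
                  window=60.0, ratio=0.8, dot_min=0.9):
    """Return the index of the matching library feature, or None if no match."""
    # Step 1: pixel window -- only library features near the new detection are candidates
    near = np.where(np.linalg.norm(lib_pix - new_pix, axis=1) < window)[0]
    if near.size < 2:
        return None
    # Step 2: standard SIFT test -- the best candidate must clearly beat the second best
    dists = np.linalg.norm(lib_desc[near] - new_desc, axis=1)
    order = np.argsort(dists)
    if dists[order[0]] > ratio * dists[order[1]]:
        return None
    best = near[order[0]]
    # Step 3: threshold on the dot product between (normalized) descriptors
    dot = np.dot(lib_desc[best], new_desc) / (
        np.linalg.norm(lib_desc[best]) * np.linalg.norm(new_desc) + 1e-12)
    return best if dot > dot_min else None
```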

To improve the number of SIFT features detected in the underwater environment, Contrast Limited Adaptive Histogram Equalization (CLAHE) [22] is applied to the raw image prior to feature detection taking place.
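
For reference, a minimal OpenCV sketch of this preprocessing step (parameter values are illustrative, not those used in the experiments) is:

```python
import cv2

def preprocess(frame_bgr):
    """Apply CLAHE to a raw frame before SIFT detection."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```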

The pixel locations $z_j$ of the SIFT features in the image plane are passed to the algorithm as measurements. In the case of monocular vision, a feature's 3-D location estimate cannot be initialized until the feature has been observed more than once; this is necessary in order to establish range from the bearings-only measurements. The feature initialization [3] is done via least-squares estimation.

To update the features' 3-D location estimates, the measurement model (Eq. 5) is used to calculate the predicted measurement and its Jacobians:

\[
\hat{z}^{[i]}_j = g(s^{[i]}, \mu^{[i]}_j) \qquad (7)
\]
\[
G_{X_j} = \left. \frac{\partial g}{\partial X_j} \right|_{(s^{[i]},\, \mu^{[i]}_j)} \qquad (8)
\]
\[
G_{s} = \left. \frac{\partial g}{\partial s} \right|_{(s^{[i]},\, \mu^{[i]}_j)} \qquad (9)
\]

This is followed by a standard EKF measurement update, performed on the feature's mean and covariance (Eqs. 10-13). Note that r is the noise standard deviation. This parameter is a measure of the importance with which the latest measurement is treated.

\[
Q = G_s P G_s^T + G_{X_j} \Sigma^{[i]}_j G_{X_j}^T + r^2 I \qquad (10)
\]
\[
K = \Sigma^{[i]}_j G_{X_j}^T Q^{-1} \qquad (11)
\]
\[
\mu^{[i]}_j = \mu^{[i]}_j + K\left(z_j - \hat{z}^{[i]}_j\right) \qquad (12)
\]
\[
\Sigma^{[i]}_j = \left(I - K G_{X_j}\right) \Sigma^{[i]}_j \qquad (13)
\]
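
A compact sketch of this per-feature EKF update (Eqs. 10-13), assuming the Jacobians of Eqs. 8-9 and the kinematic-state covariance P are provided by the caller, is:

```python
import numpy as np

def feature_ekf_update(mu, Sigma, z, z_pred, Gs, GX, P, r):
    """Per-feature EKF measurement update of Eqs. 10-13."""
    Q = Gs @ P @ Gs.T + GX @ Sigma @ GX.T + (r ** 2) * np.eye(2)   # Eq. 10
    K = Sigma @ GX.T @ np.linalg.inv(Q)                            # Eq. 11
    mu_new = mu + K @ (z - z_pred)                                 # Eq. 12
    Sigma_new = (np.eye(3) - K @ GX) @ Sigma                       # Eq. 13
    return mu_new, Sigma_new, Q
```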

4.2 Particle Weighting and Resampling

Following the measurement update step, the particle filter updates an importance weighting for each of the particles. This is a measure of the likelihood of the particle, given the measurements observed up to the given time step. The importance weighting of particle i is denoted $w^{[i]}$. There are many choices available for weighting measures. For example, inlier/outlier counts based on pixel error thresholding have been used in previous SFM particle filters [14, 16].

Given the choice made above to model $p(X_j\,|\,s^{[i]})$ as 'Gaussian-like' ($\mathcal{N}(\mu^{[i]}_j, \Sigma^{[i]}_j)$), a natural choice of weighting is the Gaussian likelihood (as is done in the original FastSLAM algorithm [1, 17]). The measurement of feature j, $z_j$, has likelihood denoted $w^{[i]}_j$, calculated by:

\[
w^{[i]}_j = \left|2\pi Q\right|^{-\frac{1}{2}} \exp\left\{ -\frac{1}{2}\left(z_j - \hat{z}^{[i]}_j\right)^T Q^{-1} \left(z_j - \hat{z}^{[i]}_j\right) \right\} \qquad (14)
\]


The overall likelihood of a specific particle, $w^{[i]}$, is the product of the likelihoods of that particle's measurements:

\[
w^{[i]} = \prod_j w^{[i]}_j \qquad (15)
\]

Periodically, the particles are resampled, in order to remove the least likely hypotheses and focus on the most likely. This algorithm resamples based on a calculation of the effective sample size $N_{eff}$ [18]:

\[
N_{eff} = \frac{1}{\sum_i \left(w^{[i]}\right)^2} \qquad (16)
\]

When $N_{eff}$ is less than half the total number of particles, resampling takes place. In order to resample efficiently, low-variance resampling is used [17]. After resampling, all particles are set to equal weight.
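
The weighting and resampling logic of Eqs. 14-16 can be sketched as follows (illustrative only; the low-variance resampler follows the standard systematic form described in [17]):

```python
import numpy as np

def gaussian_likelihood(z, z_pred, Q):
    """Eq. 14: likelihood of a single feature measurement."""
    e = z - z_pred
    return np.exp(-0.5 * e @ np.linalg.solve(Q, e)) / np.sqrt(np.linalg.det(2.0 * np.pi * Q))

def maybe_resample(particles, weights):
    """Eqs. 15-16: normalize weights and resample when Neff drops below half."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    n_eff = 1.0 / np.sum(w ** 2)                     # Eq. 16
    if n_eff >= 0.5 * len(particles):
        return particles, w
    # low-variance (systematic) resampling
    positions = (np.arange(len(particles)) + np.random.uniform()) / len(particles)
    idx = np.searchsorted(np.cumsum(w), positions)
    idx = np.minimum(idx, len(particles) - 1)
    resampled = [particles[i] for i in idx]
    return resampled, np.full(len(particles), 1.0 / len(particles))
```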

4.3 Motion Updates

For each particle i, the proposed kinematic state $s^{[i]}_{(k+1)}$ for the next time-step is drawn from the following proposal distribution:

\[
s^{[i]}_{(k+1)} \sim
\begin{bmatrix}
\mathcal{F}\!\left(\mu^{[i]}_{prop[7:9]},\; (\Sigma^{[i]}_{prop[1,1]})^{-1}\right) \\[4pt]
\mathcal{N}\!\left(\mu^{[i]}_{prop[4:12]},\; \Sigma^{[i]}_{prop[4:12,4:12]}\right)
\end{bmatrix}
\qquad (17)
\]

This proposal distribution is a Fisher distribution over the Euler angles, and a Gaussian distribution over the other components of the kinematic state vector. It takes into account a motion model prediction ($f(s^{[i]}_k)$), as well as the latest measurements taken ($z_{j(k+1)}$) [2]. The covariance ($\Sigma^{[i]}_{prop}$) and mean ($\mu^{[i]}_{prop}$) of the proposal distribution are:

\[
\Sigma^{[i]}_{prop} = \left[ \sum_j G_s^{[i]T} \left(Q^{[i]}_j\right)^{-1} G_s^{[i]} + P^{-1} \right]^{-1} \qquad (18)
\]
\[
\mu^{[i]}_{prop} = \Sigma^{[i]}_{prop} \sum_j G_s^{[i]T} \left(Q^{[i]}_j\right)^{-1} \left(z_j - \hat{z}^{[i]}_j\right) + f(s^{[i]}_k) \qquad (19)
\]

where:

\[
\hat{z}^{[i]}_j = g(f(s^{[i]}_k), \mu^{[i]}_j) \qquad (20)
\]
\[
G_{X_j} = \left. \frac{\partial g}{\partial X_j} \right|_{(f(s^{[i]}_k),\, \mu^{[i]}_j)} \qquad (21)
\]
\[
G_{s} = \left. \frac{\partial g}{\partial s} \right|_{(f(s^{[i]}_k),\, \mu^{[i]}_j)} \qquad (22)
\]
\[
Q^{[i]}_j = R_{prop} + G^{[i]}_{X_j} \Sigma^{[i]}_j G^{[i]T}_{X_j} \qquad (23)
\]

Once the kinematic states have been proposed, the filter then advances to the next time-step and begins the measurement update again.
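
The Gaussian portion of this proposal (Eqs. 18-19) can be sketched as below; this is illustrative only, and the Fisher-distributed Euler-angle portion of Eq. 17 is omitted. The per-feature Jacobians, innovation covariances, and innovations of Eqs. 20-23 are assumed inputs.

```python
import numpy as np

def sample_proposal(f_s, Gs_list, Q_list, innov_list, P):
    """Draw the Gaussian part of the Eq. 17 proposal using Eqs. 18-19."""
    info = np.linalg.inv(P)                   # start from the motion-noise information
    shift = np.zeros_like(f_s)
    for Gs, Q, innov in zip(Gs_list, Q_list, innov_list):
        Qi = np.linalg.inv(Q)
        info += Gs.T @ Qi @ Gs                # accumulate Eq. 18
        shift += Gs.T @ Qi @ innov            # accumulate Eq. 19
    Sigma_prop = np.linalg.inv(info)
    mu_prop = Sigma_prop @ shift + f_s
    return np.random.multivariate_normal(mu_prop, Sigma_prop)
```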

5 Experiments

Field testing was conducted in conjunction with the Monterey Bay Aquarium Research Institute (MBARI). Using the MBARI research vessel Point Lobos and the ROV Ventana (shown in Figures 2 and 3), underwater tethered targets were imaged.

Figure 2: MV Point Lobos

Figure 3: ROV Ventana

An example of an underwater target of interest is shown in Figure 4. No a priori information exists, and autonomous proximity operations are desired. The purpose of the field experiments is to track its pose in the water while simultaneously reconstructing the shape of the target (a precursor to positioning control algorithms).


Figure 4: Example of an Underwater Target

A set of images was obtained with the main science camera on Ventana, at a 10 Hz rate. Samples from the sequence of 150 images, at 1 second intervals, are shown in Figure 5.

Figure 5: Sample Views from Image Sequence

Figures 6, 7, 8, 9, and 10 present the results of applying particle filter SPEAR to the set of images above. The reconstructed target is shown in Figure 6; compared with the views of the target shown in Figure 5, it shows an accurate representation of the 3-D shape.

The relative pose estimate time history is shownin Figures 7-10. Figure 7 shows the Euler angles,Figure 8 shows the angular rates, Figure 9 showsthe position, and Figure 10 shows the velocity.

Figure 6: Reconstruction of the Target

6 Summary

A technique has been presented to simultaneously estimate the pose and 3-D shape of an underwater tethered object. The algorithm described here is based on an efficient particle filter algorithm from the SLAM community (FastSLAM). In particular, the algorithm was applied to a problem with bearings-only measurements and no a priori information about the target.

The results show the successful estimation of an underwater target's shape and pose, validating the particle filter approach of FastSLAM as an effective tool for the SPEAR problem.

Acknowledgements

The authors would like to thank the Monterey Bay Aquarium Research Institute for support, and the Stanford Graduate Fellowship and NASA grant #Z627401-A for partial funding. Additionally, the authors would like to thank the MBARI pilots (Knute Brekke, Mike Burczynski, Craig Dawe, and D.J. Osborne) and crew of the Point Lobos for their assistance in conducting ROV operations in support of the research goals of this paper.


Figure 7: Orientation of the Mock Target

Figure 8: Angular Rate of the Mock Target

Figure 9: Position of the Mock Target

Figure 10: Velocity of the Mock Target

References

[1] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A factored solution to the simultaneous localization and mapping problem," in Eighteenth National Conference on Artificial Intelligence. Menlo Park, CA, USA: American Association for Artificial Intelligence, 2002, pp. 593–598.

[2] ——, "FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges," in International Joint Conference on Artificial Intelligence, vol. 18, 2003, pp. 1151–1156.

[3] S. Augenstein and S. M. Rock, "Simultaneous estimation of target pose and 3-D shape using the FastSLAM algorithm," in Proceedings of the AIAA Guidance, Navigation, and Control Conference. Chicago, IL: AIAA, August 2009.

[4] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[5] J. Salvi, Y. Petillot, S. Thomas, and J. Aulinas, "Visual SLAM for Underwater Vehicles using Video Velocity Log and Natural Landmarks," in Proceedings OCEANS 2008, 2008.

[6] R. Eustice, O. Pizarro, and H. Singh, "Visually Augmented Navigation for Autonomous Underwater Vehicles," IEEE Journal of Oceanic Engineering, vol. 33, no. 2, pp. 103–122, 2008.

[7] P.-M. Lee, B.-H. Jeon, and S.-M. Kim, "Visual servoing for underwater docking of an autonomous underwater vehicle with one camera," in OCEANS 2003. Proceedings, vol. 2, Sept. 2003, pp. 677–682.

[8] A. Plotnik and S. Rock, "Visual Servoing of an ROV for Servicing of Tethered Ocean Moorings," in Proceedings of the MTS/IEEE OCEANS Conference, Boston, MA, 2006.

[9] P. Jasiobedzki, S. Se, M. Bondy, and R. Jakola, "Underwater 3D Mapping and Pose Estimation for ROV Operations," in Proceedings OCEANS 2008, 2008.


[10] Y. Ma, S. Soatto, J. Kosecka, and S. S. Sastry,An Invitation to 3-D Vision: From Images toGeometric Models. Springer, 2003.

[11] T. Broida, S. Chandrashekhar, and R. Chellappa, "Recursive 3-D motion estimation from a monocular image sequence," IEEE Transactions on Aerospace and Electronic Systems, vol. 26, no. 4, pp. 639–656, Jul 1990.

[12] A. Azarbayejani and A. Pentland, "Recursive estimation of motion, structure, and focal length," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 6, pp. 562–575, Jun 1995.

[13] A. Chiuso, P. Favaro, H. Jin, and S. Soatto, "Structure from motion causally integrated over time," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 523–535, 2002.

[14] M. Pupilli and A. Calway, "Real-time camera tracking using a particle filter," in Proc. British Machine Vision Conference, 2005, pp. 519–528.

[15] D. Marimon, Y. Maret, Y. Abdeljaoued, and T. Ebrahimi, "Particle filter-based camera tracker fusing marker and feature point cues," in Proc. SPIE, vol. 6508, no. 1, 2007. [Online]. Available: http://link.aip.org/link/?PSI/6508/65080O/1

[16] P. Chang and M. Hebert, "Robust tracking and structure from motion through sampling based uncertainty representation," in Proceedings of ICRA '02, May 2002.

[17] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents). Cambridge, MA: MIT Press, 2005.

[18] A. Doucet, N. De Freitas, and N. Gordon,Sequential Monte Carlo Methods in Practice.Springer-Verlag, 2001.

[19] M. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, and M. Csorba, "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Transactions on Robotics and Automation, vol. 17, no. 3, pp. 229–241, Jun 2001.

[20] N. Fisher, T. Lewis, and B. Embleton, Statistical Analysis of Spherical Data. Cambridge University Press, 1993.

[21] K. P. Murphy, "Bayesian map learning in dynamic environments," in Neural Info. Proc. Systems (NIPS), 1999, pp. 1015–1021.

[22] K. Zuiderveld, "Contrast Limited Adaptive Histogram Equalization," in Graphics Gems IV. Cambridge, MA: Academic Press, 1994, ch. VIII.5, pp. 474–485.