
Vision-Based Shipwreck Mapping: on Evaluating Features Quality and Open Source State Estimation Packages

A. Quattrini Li, A. Coskun, S. M. Doherty, S. Ghasemlou, A. S. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. M. O'Kane and I. Rekleitis

Computer Science and Engineering Department, University of South Carolina

Email: [albertoq,yiannisr,jokane]@cse.sc.edu, [acoskun,dohertsm,sherving,ajagtap,modasshm,srahman,akanksha,mariosx]@email.sc.edu

Abstract—Historical shipwrecks are important for many reasons, including historical, touristic, and environmental ones. Currently, limited efforts for constructing accurate models are performed by divers who need to take measurements manually using a grid and measuring tape, or using handheld sensors. A commercial product, Google Street View¹, contains underwater panoramas from select locations around the planet, including a few shipwrecks, such as the SS Antilla in Aruba and the Yongala at the Great Barrier Reef. However, these panoramas contain no geometric information, and thus no 3D representations of these wrecks are available.

This paper provides, first, an evaluation of visual feature quality on datasets that span from indoor to underwater environments. Second, by testing several open-source vision-based state estimation packages on different shipwreck datasets, insights on open challenges for shipwreck mapping are provided. Some good practices for replicable results are also discussed.

I. INTRODUCTION

Historical shipwrecks tell an important part of history and at the same time have a special allure for most humans, as exemplified by the plethora of movies and artworks about the Titanic. Shipwrecks are also one of the top scuba diving attractions all over the world; see Fig. 1. Many historical shipwrecks are deteriorating due to warm salt water, human interference, and extreme weather (frequent tropical storms). Constructing accurate models of these sites would be extremely valuable not only for the historical study of the shipwrecks, but also for monitoring subsequent deterioration. Currently, limited mapping efforts are performed by divers who need to take measurements manually using a grid and measuring tape, or using handheld sensors [1], a tedious, slow, and sometimes dangerous task. Automating such a task with underwater robots equipped with cameras, e.g., Aqua [2], would be extremely valuable. Some attempts have been made using underwater vehicles with expensive setups, e.g., Remotely Operated Vehicles (ROVs) [3], [4].

Fig. 1. Aqua robot at the Pamir shipwreck, Barbados.

Autonomous mapping using visual data has received a lot of attention in the last decade, resulting in many published research papers and open source packages, supported by impressive demonstrations. However, applying any of these packages to a new dataset has proven extremely challenging because of two main factors: software engineering challenges, such as lack of documentation, compilation issues, and dependencies; and algorithmic limitations, e.g., special initialization motions for monocular cameras and the number of and sensitivity to parameters [5]. Also, most of them are developed and tested with urban settings in mind.

¹https://www.google.com/maps/streetview/#oceans

This paper analyzes, first, different feature detectors and descriptors on several datasets taken from indoor, urban, and underwater domains. Second, some open source packages for visual SLAM are evaluated. The main contribution of this paper is to provide, based on this evaluation, insights on the open challenges in shipwreck mapping, so that they can be taken into consideration when designing new mapping algorithms.

The next section discusses research on shipwreck mapping. Section III presents an analysis of visual feature quality. Section IV shows qualitative results of some visual SLAM algorithms. Finally, Section V concludes the paper by discussing some insights gained from this evaluation.

II. RELATED WORK

Different technologies have been used to survey shipwreck areas, including ROVs, AUVs, and diver-held sensors. Nornes et al. [6] acquired stereo images from an ROV off the coast of Trondheim Harbour, Norway, where the M/S Herkules shipwreck sank; commercially available software was then used to process the images and reconstruct a model of the shipwreck. In [3], a deepwater ROV is adopted to map, survey, sample, and excavate a shipwreck area. Sedlazeck et al. [4] reconstruct a shipwreck in 3D by preprocessing images collected by an ROV and applying a Structure from Motion based algorithm; the images used for testing the algorithm contained some structure and a lot of background where only water was visible.

Other works use AUVs to collect datasets and build shipwreck models. Bingham et al. [7] used the SeaBED AUV to build a texture-mapped bathymetry of the Chios shipwreck site in Greece. Gracias et al. [8] deployed the Girona500 AUV to survey the ship 'La Lune' off the coast of Toulon, France. Bosch et al. [9] integrated an omnidirectional camera into an AUV to create underwater virtual tours.

Integrating inertial and visual data helps to better estimate the pose of the camera, especially in the underwater domain, where images are not as reliable as in ground environments. Hogue et al. [10] demonstrated shipwreck reconstruction with a stereo vision-inertial sensing device. Moreover, structured light can provide extra information to recover the structure of the scene. In [11], structured light was used to aid the reconstruction of high resolution bathymetric maps of underwater shipwreck sites.

Methods to process such datasets are becoming more and more reliable. Campos et al. [12] proposed a method to reconstruct underwater surfaces using range scanning technology: given raw point sets, smooth surface approximations are computed as triangle meshes. The method was tested on several datasets, including the ship La Lune. Williams et al. [13] described techniques to merge multiple datasets, including stereo images of a shipwreck off the coast of the Greek island of Antikythera collected during different expeditions.

However, these works usually collect data by teleoperating the robot and process the data offline. To automate the exploration task, real-time methods for localizing the robot while simultaneously mapping the environment are necessary.

III. VISUAL FEATURE QUALITY

There are two main classes of visual SLAM techniques: sparse and dense. Sparse visual SLAM utilizes selected features to track the motion of the camera and reconstruct the scene; thus, it is important to identify stable features to use. Dense visual SLAM uses segments of the image and attempts to reconstruct as much of the scene as possible. In [14], SURF features are used for localizing and mapping scenes in the underwater domain. The quality of some feature detectors, descriptors, and matchers in the underwater domain was assessed by Shkurti et al. [15]. The influence of underwater conditions such as blurring and illumination changes has been studied by Oliver et al. [16].

In the following, feature detectors and descriptors that are available as open source implementations in OpenCV are tested using their default parameters. The tests are run on a subset of images from several datasets from different domains (see Fig. 2), including:

• an indoor environment, collected with a Clearpath Husky equipped with a camera;

• outdoor urban environments, specifically the KITTI dataset [17];

• outdoor natural environments, in particular the Devon Island rover navigation dataset [18];

• coral reefs, collected by a drifter equipped with a monocular camera [19];

• and shipwrecks, collected with the Aqua2 underwater robot equipped with front and back cameras.

Figs. 3-7 show: (a) the average number of detected features, (b) the number of inliers used for the estimated homographies, and (c) the number of images matched together. Each figure presents these measures for the different datasets, with the different combinations of feature detectors and descriptors available in OpenCV². Note that the number of features detected in a single frame is not necessarily a measure of how good a feature is. Many features cannot be matched with features in subsequent frames, and thus do not contribute to the robot localization or the environment reconstruction. Other features are not stable, changing location over time. Indeed, even when some methods find many features, the number of inliers is generally relatively low. Some combinations of feature detector and descriptor extractor are not present in the figures because no features could be found.
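To make the evaluation pipeline concrete, the following is a minimal sketch, not the authors' exact harness, of one detector/descriptor/matcher combination from the legends (FAST/ORB/BruteForce-Hamming), written against OpenCV's Python bindings; the frame file names are placeholders.

import cv2
import numpy as np

# Combination 18 from the legends: FAST detector, ORB descriptor,
# brute-force Hamming matcher with cross-checking.
detector = cv2.FastFeatureDetector_create()
extractor = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def detect_and_describe(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kps = detector.detect(img, None)         # keypoint locations only
    kps, desc = extractor.compute(img, kps)  # descriptors at those locations
    return kps, desc

kps1, desc1 = detect_and_describe("frame_000.png")  # placeholder file names
kps2, desc2 = detect_and_describe("frame_001.png")

matches = matcher.match(desc1, desc2)
pts1 = np.float32([kps1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kps2[m.trainIdx].pt for m in matches])

# Homography between consecutive frames; the RANSAC mask marks the
# inliers, i.e., the quantity reported in panel (b) of Figs. 3-7.
if len(matches) >= 4:
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
    num_inliers = int(mask.sum()) if mask is not None else 0
else:
    num_inliers = 0
print(len(kps1), "features detected,", num_inliers, "inliers")

Repeating this over all consecutive pairs and averaging yields the quantities plotted in the figures; mixed combinations such as GFTT/DAISY are obtained by swapping the detector and extractor objects.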

In the indoor environment, there are several combinations of feature detector and descriptor extractor that work well, and the features are quite stable, as can be observed from the low standard deviation in the number of images matched; see Fig. 3.

In outdoor urban environments, the distributions of the number of detected features and of the number of inliers are similar to those in the indoor environment; see Fig. 3 and Fig. 4. This similarity can be explained by the fact that both classes of environments are quite rich in features and similarly structured. However, the number of matched images decreases in the urban environment compared to the indoor one. One of the reasons is that in urban environments dynamic obstacles, such as cars and bikers, are often present in the scene, violating the common assumption of a static scene.

In the outdoor natural environment datasets, while the number of detected features is high, as there are many rocks in the scene, the number of inliers drops; see Fig. 5. The probability of mismatches is higher, given that the rocky terrain lacks distinctive features. The same happens in the coral reef dataset. The combinations of feature detector and descriptor extractor also show different distributions for the coral reef dataset compared to the above-water ones.

²http://opencv.org

Fig. 2. Characteristic images from the evaluated datasets, namely indoor, outdoor urban, outdoor natural, coral reefs, and shipwreck.

In the shipwreck datasets, the number of features varies greatly over images, i.e., there is a high variance in the number of detected features, compared to the other datasets; see Fig. 7. One of the reasons is that in shipwrecks the illumination changes, leading to situations in which no features can be detected.

In terms of feature detector and descriptor, in this evaluation, different combinations provide good results in the indoor environment, including combinations of ORB, SIFT, SURF, and DAISY, as most of the images are matched. In the outdoor urban environment, the combination SURF/SIFT provides the best results. In the outdoor natural environment, BRISK/SIFT, FAST/SIFT, FAST/SURF, GFTT/SIFT, and Agast/SIFT show the highest number of images matched. In coral reefs, only SURF/SIFT and ORB/SURF match many images. In shipwrecks, GFTT/DAISY, ORB/SURF, Agast/DAISY, SURF/DAISY, FAST/SIFT, FAST/DAISY, ORB/DAISY, GFTT/SIFT, and BRISK/DAISY all present good results. These results suggest that some feature detectors work better on some specific datasets than others.

IV. VISUAL SLAM QUALITATIVE EVALUATION

Six of the most promising open source packages are evaluated on four different datasets. The packages have been selected to cover different types of state-of-the-art techniques: feature-based methods, PTAM [20] and ORB-SLAM [21]; a semi-direct method, SVO [22]; a direct method, LSD-SLAM [23]; a neural-based method, RatSLAM [24]; and global optimization, g2o [25].

The datasets used have been collected over the years inside and outside shipwrecks off the coast of Barbados, by employing an Aqua2 underwater robot, an AUV equipped with an IMU and front/back cameras, and also by employing a GoPro 3D Dual Hero System with two GoPro Hero3+ cameras operated by a scuba diver; see Fig. 8. As the data were collected at different times, the datasets contain images taken under different conditions, including lighting variations, variable visibility, different levels of turbidity, loss of contrast, and different motion types, thus covering a broad spectrum of situations in which the robot might find itself. These datasets are in the "rosbag" format³ defined for ROS, so that they can easily be exported and processed.

³http://www.ros.org/wiki/rosbag
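As an illustration of how such bags can be processed, the sketch below exports camera frames with the standard rosbag and cv_bridge Python APIs; the bag file name and image topic are placeholders, as the actual topic names depend on the platform used.

import cv2
import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()
# Placeholder bag name and topic; actual topics depend on the robot.
with rosbag.Bag("shipwreck.bag") as bag:
    for i, (topic, msg, t) in enumerate(
            bag.read_messages(topics=["/camera/image_raw"])):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        cv2.imwrite("frame_%06d.png" % i, frame)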

Several tests were performed in order to reasonably tune each package's parameters, following all available suggestions from the packages' authors.

ORB-SLAM has shown the most promising results, being able to track features over time, while LSD-SLAM, being a method based on optical flow, is more affected by illumination changes. RatSLAM utilizes a learning process for adjusting how neurons are triggered, so it needs the robot to visit the same place multiple times to improve the trajectory. SVO and PTAM work reasonably well in a small area, resulting in a correct partial trajectory on the tested datasets. g2o is employed on a small number of keyframes by some of the methods, including ORB-SLAM and LSD-SLAM, and reduces the error in the trajectory and the map.
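g2o itself is a C++ framework; purely as a toy illustration of the pose-graph optimization idea it implements, the sketch below corrects 1D poses given noisy odometry edges and one loop-closure edge using scipy (all numbers are made up).

import numpy as np
from scipy.optimize import least_squares

# Edges (i, j, z) assert x_j - x_i = z: three drifting odometry
# constraints plus one loop closure relating pose 3 back to pose 0.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 2.9)]

def residuals(free):
    x = np.concatenate(([0.0], free))  # pose 0 is fixed as the anchor
    return [x[j] - x[i] - z for i, j, z in edges]

x0 = np.cumsum([1.0, 1.1, 0.9])        # initial guess: integrate odometry
print(least_squares(residuals, x0).x)  # corrected poses 1..3

The loop closure pulls the drifted odometry estimate back toward a consistent trajectory, which is the effect observed when g2o runs over the keyframes of ORB-SLAM or LSD-SLAM.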

Shipwrecks are quite rich in texture because they are biodiversity hotspots. This allows the methods to track the camera reasonably well. However, mapping them presents a different set of challenges compared to other scenarios, e.g., coral reef monitoring. While the robot is moving, the illumination can change greatly across overlapping scenes. This leads in most cases to a loss of localization; see Fig. 9. Applying image restoration techniques, such as the ones proposed in [26], [27], that can be used in real time will be part of our future work.
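The methods of [26], [27] are full restoration pipelines; as a simple real-time stand-in, contrast-limited adaptive histogram equalization (CLAHE), available in OpenCV, can soften such illumination changes before features are extracted (the parameter values below are illustrative only).

import cv2

# Not the methods of [26], [27]: a per-tile contrast equalization that
# runs in real time and softens illumination changes between frames.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def restore(bgr):
    # Equalize only the lightness channel so colors are preserved.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)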

In Fig. 10, a sample run of ORB-SLAM and LSD-SLAM is shown on a GoPro dataset. The diver held the camera facing downward and hovered over a shipwreck. As LSD-SLAM is a direct method that relies on pixel intensities, the presence of moving objects like fishes affects its performance. ORB-SLAM, a feature-based method, is instead more robust to the presence of dynamic obstacles. Indeed, ORB-SLAM shows a better trajectory than LSD-SLAM, though a feature-based method provides only a sparse reconstruction of the scene. Note that some errors are still present, because features could be detected on fishes and tracked for some frames. For a test of a larger set of open source packages, please refer to [5].

V. DISCUSSION AND CONCLUSIONS

Shipwrecks provide an interesting testbed for mapping algorithms, which need to be robust with respect to illumination changes, lack of contrast, and the presence of dynamic obstacles. A method that is robust in such a domain will improve the state of the art in state estimation in other domains as well. This preliminary test also highlights some good practices for replicable and measurable research in the field. Releasing code as open source allows other researchers to test and possibly adapt the methods. Indeed, there are many solutions from other research groups that showed great performance without releasing the code, e.g., [28], making them hard to evaluate and use. Another important aspect is the plethora of parameters that need to be tuned for each method, such as the number of tracked features and the number of RANSAC iterations. As finding the optimal set of parameters is not easy, especially if the experimenter is not the developer, the recommended values for these parameters and the effects induced by their variation should be well documented. Among the special requirements of certain packages are special motions that need to be performed to initialize the algorithm. For example, PTAM requires a lateral translation, a motion difficult to achieve with most vehicles. When collecting datasets, it is important to consider such special motions. Moreover, the availability of public datasets together with ground truth would enable proper evaluation and benchmarking of the packages.
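As a concrete instance of this parameter sensitivity, consider the number of RANSAC iterations. A standard textbook estimate, not specific to any of the evaluated packages, for the number of iterations N needed to draw at least one all-inlier sample with probability p is

N = log(1 - p) / log(1 - w^s),

where w is the inlier ratio and s the minimal sample size (s = 4 for a homography). With p = 0.99, an inlier ratio of w = 0.5 gives N ≈ 72, while an underwater-like ratio of w = 0.2 gives N ≈ 2876; a default tuned for urban imagery can thus be far too low for shipwreck data.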

Ongoing work includes a more in-depth analysis of feature quality, a quantitative evaluation and comparison of the packages, a study of the effects of the parameters, and the investigation of more open-source packages on the same datasets. In addition, more data will be collected off the coast of South Carolina, and the datasets will be made public to foster benchmarking. The results of this evaluation will serve as input for improving the state of the art in vision-based state estimation for shipwreck mapping, and possibly for the underwater domain in general.

ACKNOWLEDGMENT

The authors would like to thank the generous support of a Google Faculty Research Award and of the National Science Foundation grant (NSF 1513203).

REFERENCES

[1] J. Henderson, O. Pizarro, M. Johnson-Roberson, and I. Mahon, "Mapping submerged archaeological sites using stereo-vision photogrammetry," Int. Journal of Naut. Archaeology, vol. 42, pp. 243–256, 2013.

[2] J. Sattar, G. Dudek, O. Chiu, I. Rekleitis, P. Giguere, A. Mills, N. Plamondon, C. Prahacs, Y. Girdhar, M. Nahon, and J.-P. Lobos, "Enabling autonomous capabilities in underwater robotics," in Proc. of IEEE/RSJ Int. Conference on Intelligent Robots and Systems, 2008, pp. 3628–3634.

[3] F. Søreide and M. E. Jasinski, "Ormen Lange: Investigation and excavation of a shipwreck in 170 m depth," in Proc. of MTS/IEEE OCEANS, 2005, pp. 2334–2338.

[4] A. Sedlazeck, K. Koser, and R. Koch, "3D reconstruction based on underwater video from ROV Kiel 6000 considering underwater imaging conditions," in Proc. of MTS/IEEE OCEANS, 2009, pp. 1–10.

[5] A. Quattrini Li, A. Coskun, S. M. Doherty, S. Ghasemlou, A. S. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. M. O'Kane, and I. Rekleitis, "Experimental comparison of open source vision based state estimation algorithms," in Int. Symp. on Experimental Robotics, 2016.

[6] S. M. Nornes, M. Ludvigsen, Ø. Ødegard, and A. J. Sørensen, "Underwater photogrammetric mapping of an intact standing steel wreck with ROV," in Proc. of the Int. Federation of Automatic Control (IFAC), 2015, pp. 206–211.

[7] B. Bingham, B. Foley, H. Singh, R. Camilli, K. Delaporta, R. Eustice, A. Mallios, D. Mindell, C. Roman, and D. Sakellariou, "Robotic tools for deep water archaeology: Surveying an ancient shipwreck with an autonomous underwater vehicle," Journal of Field Robotics, vol. 27, no. 6, pp. 702–717, 2010.

[8] N. Gracias, P. Ridao, R. Garcia, J. Escartin, M. L'Hour, F. Cibecchini, R. Campos, M. Carreras, D. Ribas, N. Palomeras, L. Magi, A. Palomer, T. Nicosevici, R. Prados, R. Hegedus, L. Neumann, F. de Filippo, and A. Mallios, "Mapping the Moon: Using a lightweight AUV to survey the site of the 17th century ship 'La Lune'," in Proc. of MTS/IEEE OCEANS, 2013, pp. 1–8.

[9] J. Bosch, P. Ridao, D. Ribas, and N. Gracias, "Creating 360° underwater virtual tours using an omnidirectional camera integrated in an AUV," in Proc. of MTS/IEEE OCEANS – Genova, 2015, pp. 1–7.

[10] A. Hogue, A. German, and M. Jenkin, "Underwater environment reconstruction using stereo and inertial data," in Proc. of IEEE Int. Conference on Systems, Man and Cybernetics, 2007, pp. 2372–2377.

[11] C. Roman, G. Inglis, and J. Rutter, "Application of structured light imaging for high resolution mapping of underwater archaeological sites," in Proc. of MTS/IEEE OCEANS – Sydney, 2010, pp. 1–9.

[12] R. Campos, R. Garcia, P. Alliez, and M. Yvinec, "A surface reconstruction method for in-detail underwater 3D optical mapping," The Int. Journal of Robotics Research, vol. 34, no. 1, pp. 64–89, 2015.

[13] S. B. Williams, O. Pizarro, and B. Foley, Return to Antikythera: Multi-session SLAM Based AUV Mapping of a First Century B.C. Wreck Site. Springer Int. Publishing, 2016, pp. 45–59.

[14] J. Aulinas, M. Carreras, X. Llado, J. Salvi, R. Garcia, R. Prados, and Y. R. Petillot, "Feature extraction for underwater visual SLAM," in Proc. of MTS/IEEE OCEANS – Spain, 2011, pp. 1–7.

[15] F. Shkurti, I. Rekleitis, and G. Dudek, "Feature tracking evaluation for pose estimation in underwater environments," in Proc. of the Canadian Conference on Computer and Robot Vision, 2011, pp. 160–167.

[16] K. Oliver, W. Hou, and S. Wang, "Image feature detection and matching in underwater conditions," in Proc. SPIE 7678, Ocean Sensing and Monitoring II, 2010.

[17] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets Robotics: The KITTI Dataset," The Int. Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.

[18] P. T. Furgale, P. Carle, J. Enright, and T. D. Barfoot, "The Devon Island Rover Navigation Dataset," Int. Journal of Robotics Research, vol. 31, no. 6, pp. 707–713, 2012.

[19] M. Xanthidis, A. Quattrini Li, and I. Rekleitis, "Shallow coral reef surveying by inexpensive drifters," in Proc. of MTS/IEEE OCEANS – Shanghai, 2016, pp. 1–9.

[20] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in IEEE and ACM Int. Symp. on Mixed and Augmented Reality, 2007, pp. 225–234.

[21] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: A Versatile and Accurate Monocular SLAM System," IEEE Trans. Robot., vol. 31, no. 5, pp. 1147–1163, 2015.

[22] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," in Proc. of IEEE Int. Conference on Robotics and Automation, 2014, pp. 15–22.

[23] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in European Conference on Computer Vision (ECCV), ser. Lecture Notes in Computer Science, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds. Springer Int. Publishing, 2014, vol. 8690, pp. 834–849.

[24] D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, and M. Milford, "OpenRatSLAM: an open source brain-based SLAM system," Autonomous Robots, vol. 34, no. 3, pp. 149–176, 2013.

[25] R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, "g2o: A general framework for graph optimization," in IEEE Int. Conf. on Robotics and Automation, 2011, pp. 3607–3613.

[26] I. Vasilescu, C. Detweiler, and D. Rus, "Color-accurate underwater imaging using perceptual adaptive illumination," Autonomous Robots, vol. 31, no. 2, pp. 285–296, 2011.

[27] G. Bianco, M. Muzzupappa, F. Bruno, R. Garcia, and L. Neumann, "A new color correction method for underwater imaging," in ISPRS/CIPA Workshop on Underwater 3D Recording and Modeling, 2015.

[28] J. Hesch, D. Kottas, S. Bowman, and S. Roumeliotis, "Consistency Analysis and Improvement of Vision-aided Inertial Navigation," IEEE Transactions on Robotics, vol. 30, no. 1, pp. 158–176, 2014.

Fig. 3. Number of detected features (a), of inliers (b), and of images matched (c) in the indoor environment; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 4. Number of detected features (a), of inliers (b), and of images matched (c) in the outdoor urban environment; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 5. Number of detected features (a), of inliers (b), and of images matched (c) in the outdoor natural environment; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 6. Number of detected features (a), of inliers (b), and of images matched (c) in the coral reefs; the legend reports the feature detector, the descriptor, and the matcher used.

Fig. 7. Number of detected features (a), of inliers (b), and of images matched (c) in the shipwreck dataset; the legend reports the feature detector, the descriptor, and the matcher for each of the 110 combinations tested (e.g., 1 FAST/FREAK/BruteForce, 2 ORB/ORB/BruteForce-Hamming, ..., 110 FAST/BriefDescriptorExtractor/BruteForce).


Fig. 8. Characteristic images from the evaluated datasets: AUV outside and inside the wreck; manually operated camera outside and inside the wreck.

Fig. 9. Sample images captured by the Aqua robot inside the Bajan Queen shipwreck within an interval of 30 seconds.

Fig. 10. Sample run of ORB-SLAM (first and second figures) and LSD-SLAM (third and fourth figures) on footage collected with a GoPro.