


Learning Camera Localization via Dense Scene Matching
Supplementary Material

Shitao Tang¹  Chengzhou Tang¹  Rui Huang²  Siyu Zhu²  Ping Tan¹

¹Simon Fraser University  ²Alibaba A.I. Labs
{shitao tang, chengzhou tang, pingtan}@sfu.ca  {rui.hr, siting.zsy}@alibaba-inc.com

This supplementary material provides the following: 1) the architecture of Net_coords and Net_conf as described in Sec. 3.3.2 and Sec. 3.3.3 of the main paper; 2) additional analysis of the effectiveness of the top K sorting strategy and of the selection of parameters, i.e., image resolution and the number of scene images. In the end, we present more visualization results comparing the coordinate maps estimated by different methods.

1. Architecture of Net_coords and Net_conf

Fig. 1 shows the architecture of Net_conf and Net_coords. The input of Net_coords is an H_l × W_l × 4K (K = 16 in our implementation) cost-coordinate volume, formed by concatenating the cost volume with 3D scene coordinates. As shown in Fig. 1, Net_coords consists of 3 residual blocks [1] and one dense block [2]. The residual blocks are built from 1 × 1 convolutional layers; they take the cost-coordinate volume as input and generate an H_l × W_l × 64 coordinate feature map. Then, the scene coordinate map is estimated by the dense block, which takes the concatenation of the image features, the coordinate features, and the initial coordinate map up-sampled from the last layer (if applicable). Net_conf, on the other hand, consists of 5 residual blocks with context normalization [3]. It takes the concatenation of the estimated scene coordinate map and the corresponding 2D pixel coordinate map and estimates a confidence score for each pixel.
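Since a 1 × 1 convolution is just a per-pixel linear map, the residual blocks above can be sketched in a few lines. The snippet below only illustrates the tensor shapes involved; the random placeholder weights, ReLU placement, and block internals are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def conv1x1(x, weight, bias):
    """A 1x1 convolution is a per-pixel linear map.
    x: (H, W, C_in), weight: (C_in, C_out), bias: (C_out,)."""
    return x @ weight + bias

def resblock1x1(x, w1, b1, w2, b2):
    """Residual block built from two 1x1 convolutions (hypothetical internals)."""
    y = np.maximum(conv1x1(x, w1, b1), 0.0)  # ReLU
    y = conv1x1(y, w2, b2)
    return np.maximum(x + y, 0.0)            # skip connection + ReLU

rng = np.random.default_rng(0)
H, W, K = 24, 32, 16
x = rng.standard_normal((H, W, 4 * K))       # cost-coordinate volume, 4K = 64 channels

# three 1x1 residual blocks operating on 64 channels, as in Fig. 1
for _ in range(3):
    w1, b1 = rng.standard_normal((64, 64)) * 0.01, np.zeros(64)
    w2, b2 = rng.standard_normal((64, 64)) * 0.01, np.zeros(64)
    x = resblock1x1(x, w1, b1, w2, b2)

print(x.shape)  # (24, 32, 64)
```

The loop mirrors the residual blocks in Fig. 1: the 4K = 64-channel cost-coordinate volume stays at 64 channels and ends as the H_l × W_l × 64 coordinate feature map that is then concatenated with the image features for the dense block.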

2. Additional Analysis

This section provides additional analysis of DSM. All experiments are conducted on the 7Scenes dataset. The data processing and training procedure are the same as described in the main paper. At inference time, we use 1 out of every 10 frames of each sequence. Pose accuracy, the percentage of predicted poses falling within the threshold (5°, 5 cm), is used as the evaluation metric.

Effects of correlation sorting. As described in Sec. 3.3.1 of the main paper, one step of cost volume construction is sorting the correlation tensor and selecting the top K coordinates for each pixel. The motivation be-

[Figure 1 diagram, recoverable details: Net_coords passes the H_l × W_l × (16 × 4) cost-coordinate volume through 1 × 1 residual blocks (in/out: 64 and 128 channels), concatenates the result with the H_l × W_l × 256 image features and the H_l × W_l × 3 initial coordinates, and feeds a dense block whose layers take 128(+3) up to 448(+3) input channels with 64 output channels each, ending in a 448(+3)-to-3 layer that predicts the H_l × W_l × 3 scene coordinates. Net_conf maps the concatenation of the scene coordinates and the H_l × W_l × 2 pixel coordinates (5 channels) to 128 channels, applies five 1 × 1 residual blocks (in/out: 128), and outputs the H_l × W_l × 1 uncertainty map via a 128-to-1 layer.]

Figure 1: Architecture of Net_conf and Net_coords. We use residual blocks for Net_conf and a dense block for Net_coords.

hind this operation is twofold. First, since the number of retrieved scene images varies, top K selection yields a cost volume of fixed size. Second, a sorted cost volume leads to a more accurate estimated coordinate map. To verify the effectiveness of correlation sorting, we fix the number of scene images to 5 and directly use the correlation tensor as the cost volume for coordinate map regression. The results are shown in Table 1: correlation sorting consistently improves pose accuracy on all sequences. Moreover, since top K sorting and selection re-


            Chess  Fire  Heads  Office  Pumpkin  Kitchen  Stairs
No sorting  0.82   0.74  0.85   0.72    0.43     0.58     0.05
Sorting     0.96   0.95  1.00   0.88    0.53     0.72     0.66

Table 1: Pose accuracy with and without top K correlation sorting. Correlation sorting consistently improves pose accuracy on all sequences.
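For reference, the (5°, 5 cm) pose accuracy reported in these tables can be computed as below. This is a generic sketch, not the authors' evaluation code; the function names are hypothetical and translations are assumed to be in meters:

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error in degrees and translation error in the unit of t."""
    # angle of the relative rotation R_est^T @ R_gt
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return rot_err, np.linalg.norm(t_est - t_gt)

def pose_accuracy(est, gt, rot_thresh=5.0, trans_thresh=0.05):
    """Fraction of poses within (rot_thresh degrees, trans_thresh meters)."""
    hits = 0
    for (Re, te), (Rg, tg) in zip(est, gt):
        rot_err, trans_err = pose_errors(Re, te, Rg, tg)
        hits += (rot_err < rot_thresh) and (trans_err < trans_thresh)
    return hits / len(gt)

# identical poses pass; a 10-degree rotation about z fails the 5-degree threshold
I3, z = np.eye(3), np.zeros(3)
c, s = np.cos(np.radians(10.0)), np.sin(np.radians(10.0))
Rz10 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(pose_accuracy([(I3, z), (Rz10, z)], [(I3, z), (I3, z)]))  # 0.5
```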

Num.    Chess  Fire  Heads  Office  Pumpkin  Kitchen  Stairs
1       0.87   0.85  0.87   0.71    0.45     0.63     0.17
3       0.90   0.94  0.91   0.79    0.46     0.67     0.20
5       0.94   0.94  0.94   0.80    0.54     0.68     0.24
10 (*)  0.96   0.95  1.00   0.88    0.53     0.72     0.66

Table 2: Pose accuracy with respect to the number of scene images. The network is trained and tested with the corresponding number of scene images, except in the 10-image setting: the notation (*) means the network is trained with 5 scene images instead of 10.

Reso.      Chess  Fire  Heads  Office  Pumpkin  Kitchen  Stairs
192 × 256  0.92   0.84  0.89   0.78    0.49     0.64     0.23
384 × 512  0.96   0.95  1.00   0.88    0.53     0.72     0.66

Table 3: Pose accuracy with respect to different image resolutions. In our implementation, we resize all images to a resolution of 384 × 512 for better efficiency and performance.

sults in a fixed-size cost volume, we can use different numbers of scene images for training and testing. During training, the number of scene images can be fixed for efficiency, while at inference we can leverage more scene images for higher accuracy.

Number of scene images. To show the effect of the number of scene images N, we vary N from 1 to 10 during training and testing and evaluate the pose accuracy. The model is re-trained with the corresponding number of scene images for N = 1, 3, 5. Since training with more than 5 scene images leads to unacceptable GPU memory consumption, we still train with 5 scene images when testing with 10. As shown in Table 2, increasing N from 1 to 5 yields higher pose accuracy. In addition, testing with 10 scene images outperforms testing with 5, which indicates that even if the model is trained with fewer scene images, leveraging more scene images at inference improves performance. Considering the trade-off between performance and efficiency, we set N = 10 in the main paper.

Image resolution. We test our model at two image resolutions, 192 × 256 and 384 × 512. As shown in Table 3, 384 × 512 outperforms 192 × 256. A resolution higher than 384 × 512 would consume more GPU memory and slow down inference. Therefore, we resize all images to 384 × 512 in our system for better efficiency.
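The fixed-size property that allows different scene image counts at training and test time comes directly from the top K step. A minimal NumPy sketch follows; the array sizes, candidates-per-image count, and the score-plus-XYZ channel layout are illustrative assumptions, not the exact implementation:

```python
import numpy as np

def build_cost_volume(corr, coords, K=16):
    """Per-pixel top-K sorting and selection.
    corr: (H, W, M) correlations with M candidate scene points;
    coords: (M, 3) 3D scene coordinates of those candidates."""
    idx = np.argsort(-corr, axis=-1)[..., :K]           # (H, W, K), descending scores
    top_corr = np.take_along_axis(corr, idx, axis=-1)   # K best correlations per pixel
    top_xyz = coords[idx]                               # (H, W, K, 3) matching coordinates
    vol = np.concatenate([top_corr[..., None], top_xyz], axis=-1)
    return vol.reshape(*corr.shape[:2], 4 * K)          # fixed H x W x 4K volume

rng = np.random.default_rng(0)
H, W = 24, 32
for n_scene in (5, 10):                   # the candidate pool grows with the image count...
    M = n_scene * 200                     # (200 candidates per scene image, made up)
    corr = rng.standard_normal((H, W, M))
    coords = rng.standard_normal((M, 3))
    print(n_scene, build_cost_volume(corr, coords).shape)  # ...but the volume stays (24, 32, 64)
```

Because the output shape depends only on K, a model trained with 5 scene images can be evaluated with 10 without any architectural change.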

More coordinate map visualization. Fig. 2 provides more visualization results comparing the coordinate maps estimated by SANet, DSAC++, and our method (DSM). In general, the coordinate maps produced by DSM recover more details, closer to the ground truth, and contain fewer artifacts than those of SANet and DSAC++.

References

[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[2] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.

[3] Kwang Moo Yi, Eduard Trulls, Yuki Ono, Vincent Lepetit, Mathieu Salzmann, and Pascal Fua. Learning to find good correspondences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2666–2674, 2018.


(a) Query (b) SANet (c) DSAC++ (d) DSM (e) G.T.

Figure 2: Coordinate map visualization for SANet, DSAC++, and DSM. The coordinate maps produced by DSM recover more details, closer to the ground truth, and have fewer artifacts than those of SANet and DSAC++.