
The Visual Computer manuscript No. (will be inserted by the editor)

Real-time Dense 3D Reconstruction and Camera Tracking via Embedded Planes Representation

Yanping Fu 1 · Qingan Yan 2 · Jie Liao 1 · Alix L.H. Chow 3 · Chunxia

Xiao 1,*

Received: date / Accepted: date

Abstract This paper proposes a novel approach for ro-

bust plane matching and real-time RGB-D fusion based

on the representation of plane parameter space. In con-

trast to previous planar-based SLAM algorithms esti-

mating correspondences for each plane-pair indepen-

dently, our method instead explores the holistic topol-

ogy of all relevant planes. We note that by adopting

the low-dimensionality parameter space representation,

the plane matching can be intuitively reformulated and

solved as a point cloud registration problem. Besides es-

timating the plane correspondences, we contribute an

efficient optimization framework, which employs both

frame-to-frame and frame-to-model planar consistency

constraints. We propose a global plane map to dynam-

ically represent the reconstructed scene and alleviate

accumulation errors that exist in camera pose tracking. We validate the proposed algorithm on standard bench-

mark datasets and additional challenging real-world en-

vironments. The experimental results demonstrate that it outperforms current state-of-the-art methods in tracking robustness and reconstruction fidelity.

Keywords 3D reconstruction · RGB-D reconstruc-

tion · Camera tracking

Yanping Fu, E-mail: [email protected] · Qingan Yan, E-mail: [email protected] · Jie Liao, E-mail: [email protected] · Alix L.H. Chow, E-mail: [email protected] · Chunxia Xiao, E-mail: [email protected]
1 School of Computer Science, Wuhan University, China. 2 Silicon Valley Research Center, JD.com, United States. 3 Xiaomi Technology Co. LTD, China.
*Corresponding author: Chunxia Xiao.

Fig. 1: Reconstruction result of the proposed method on the dataset fr3/str_tex_far. (a) The RGB-D images. (b) GPM. (c) The estimated camera trajectory compared with the ground truth. (d) The reconstructed mesh without texture. (e) The reconstructed mesh with texture.

1 Introduction

We are witnessing the vigorous progress of 3D recon-

struction [4,6,25,39,40,38,43,42] in a variety of appli-

cations, such as mixed reality, CAD manufacturing, 3D

printing and robot perception, especially with the ubiq-

uity of RGB-D sensors and powerful mobile devices.

However, accurate 3D shape reconstruction is still quite

challenging. It is not easy to design general-purpose re-

construction algorithms to serve various objects and ex-

pect all reconstructions to be of high quality. For ex-

ample, the primitive shapes for offices and trees are dis-

parate. In order to effectively acquire high fidelity ge-

ometries, additional domain-specific knowledge is more

desirable.


Coplanarity serving as a specialized prior has shown

compelling capability in RGB-D structural modeling,

as flat surfaces are ubiquitous in reality. It provides a

high-level constraint for settling the challenge of drift

in pose estimation in addition to ICP registration [3].

Although planar-based reconstruction is a promising di-

rection, we have yet to see a simple and holistic solu-

tion for the problem of establishing putative planar cor-

respondences across consecutive frames. Current meth-

ods [5,7,10,21–23,27,31,35–37] find the correspondence

for each planar patch independently in a pairwise man-

ner, based on their intrinsic surface attributes or local

visual context. This is prone to neglect the mutual con-

sistency of planes and sensitive to outliers, even though

a global verification, e.g., RANSAC, is always followed

for punitively pruning incongruous mismatches. In the

case where the number of planar surfaces is small, or the scene fails to provide sufficiently discriminative texture, depth and normal information on planar surfaces, such as uniformly painted furniture, walls and corridors, any mistaken connection or rejection can be fatal. PlaneMatch [33] uses deep learning to match planes and suffers from the same problem.

The core novel aspect of this paper is that it ex-

plores the topology correlation of entire relevant planes

rather than the pairwise context between each plane

pair. We note that the detected planes can be intuitively

embedded into a low dimensional parameter space and

represented as a sparse point cloud. Consequently, the

problem of coplanarity matching can be cast as the

point cloud registration under different coordinates. In

this manner, we can easily merge over-segmented planes

and find globally consistent correspondences, facilitat-

ing camera tracking and reconstruction performances.

Thus our method focuses on reconstructing scenes that contain planar structures.

To improve the stability of the camera tracking, we

propose a joint plane constraint strategy, which adds a

frame-to-frame plane constraint and a frame-to-model

plane constraint to the camera tracking process. First, we convert planes into point clouds in the PPS and conduct

a point cloud alignment process to establish the corre-

spondence between planes in consecutive frames. Then

we add the frame-to-frame plane consistency to the ICP

method of the depth camera tracking. Secondly, we de-

sign a global plane map (GPM) to manage the plane

information of the reconstructed scene model incremen-

tally. Thus we can utilize the reconstructed scene as a prior to reduce accumulation errors that exist in cam-

era pose tracking. We also employ the PPS correspon-

dence strategy to establish the frame-to-model plane

constraints between frame and GPM, which can fur-

ther alleviate the cumulative error of camera tracking.

Fig. 1 shows the result of the proposed method. The

experimental results on multiple datasets demonstrate

the effectiveness of the proposed method in both depth

camera tracking and 3D reconstruction.

In summary, our contributions are threefold:

– We propose a novel method to establish plane cor-

respondences between different frames using point

cloud registration in the plane parameter space (PPS).

– We introduce a frame-to-frame plane constraint and

a frame-to-model plane constraint to the traditional

ICP camera tracking method, which makes the depth

camera tracking more robust.

– We introduce a global plane map (GPM) to manage

all the planes in the reconstructed scene incremen-

tally, which can significantly alleviate the cumula-

tive error of the depth camera tracking.

2 Related Work

With the ubiquity of RGB-D sensors, significant progress

has been made in recent years aiming at high-fidelity

and efficient object modeling [12,15]. In this section, we

revisit some of the works that are closely related to our

approach.

RGB-D SLAM. A large number of RGB-D based

camera tracking and reconstruction methods have ap-

peared after the pioneering work KinectFusion [25]. Kintin-

uous [39] integrated the color information into standard

geometry-based fusion frameworks to improve camera

tracking robustness and reconstruction fidelity. [40] com-

bined the geometric consistency and the photometric

consistency to track the camera and corrected the re-

constructed model using non-rigid deformation. [24] pro-

posed an ORB-SLAM system, which used ORB [29]

features for real-time monocular and RGB-D camera

tracking. However, the above methods suffer from the

trouble of modeling texture-less and geometry-less re-

gions, which are ubiquitous in indoor environments, such as corridors, floors, and walls.

Manhattan-based SLAM. Other methods use spa-

tial layout constraints to improve the robustness of cam-

era tracking. [19] exploited weak Manhattan constraints

to parse the structure of indoor environments for cam-

era tracking, which yielded better, temporally consis-

tent results. [11] parsed the building structures such

as floor and walls for monocular camera tracking. [30]

took advantage of scene box layout descriptor to in-

crease the life of the camera tracking under the oc-

clusion state. These above methods are based on the

RGB image structure analysis and inference. For mov-

ing cameras, color images are prone to motion blur, which can


[Figure 2: pipeline diagram — Input (RGB+D), Plane Extraction, Embedded Plane Representation, Point Cloud in PPS, Point Cloud Registration, Frame-to-Frame Plane Matching, Frame-to-Model Plane Matching, GPM, Plane-based Camera Tracking, Reconstruction Result.]

Fig. 2: Pipeline of our method. (a) The input RGB-D images of the proposed method. (b) Plane extraction results of consecutive frames and the GPM visualization (each color represents a plane surface). (c) Embedded plane representation as a point cloud in the PPS (the same color as the corresponding plane in (b)). (d) Plane matching results by point cloud registration in the PPS for consecutive frames and the GPM (the same color represents a matching plane pair). (e) The reconstruction result using the frame-to-frame and frame-to-model plane constraints.

seriously affect the accuracy of spatial structure esti-

mation. Besides, the narrow view angle of the RGB-D

camera may not provide enough information to infer the

spatial structure. [20] jointly solved scene layout esti-

mation and global registration for accurate indoor 3D

reconstruction, yet it cannot track the camera in real-

time.

Plane-based SLAM. To overcome the challenges faced

by standard RGB-D SLAM, [36,37] added both points

and planes as basic primitives, and the accuracy of

camera tracking is significantly improved. [8] applied a plane prior during signed distance field (SDF) calcu-

lation for 3D reconstruction denoising, stabilizing and

completing, which can recover clean surface shapes in

real-time. [31] presented a dense planar SLAM algo-

rithm by identifying and compressing arbitrary planes.

CPA-SLAM [22] applied the plane constraint of key-frames to camera tracking and introduced a soft label to

refine the plane segmentation. [35] designed the statis-

tical information grid (STING) to extract plane fea-

tures in the plane parameter space and constructed a

plane association graph to determine correspondences

between successive frames. [32] developed an object-

oriented SLAM approach, making use of adaptive ob-

ject information. [23] proposed a key-frame-based dense

planar SLAM, which used a global factor graph to opti-

mize the poses of the key-frames and landmark planes. [17,

18] jointly estimated camera poses and planar land-

marks using the linear Kalman filter as well as the Man-

hattan world prior, which can effectively avoid the local

convergence problem caused by high-dimensional con-

vex optimization. However, these methods consider plane correspondences independently in a pairwise fashion and suffer from the accuracy loss caused by punitive pruning, e.g., RANSAC or an interpretation tree [10].

The work most closely related to the proposed method

is PlaneMatch [33]. It used deep learning

to learn a descriptor for each planar patch using SDF

and color information. Then, it adopted this descrip-

tor to detect coplanar surfaces between disparate view-

points. Finally, indicator variables were used to prune false coplanar pairs and align potential true correspond-

ing planes. However, this method depends on the train-

ing datasets, and it requires visual context beyond the plane itself, thus suffering from accuracy degradation in the absence of texture and geometric details. In con-

trast, we adopt the idea of dimensionality reduction and

project all plane parameters into the plane parameter

space to construct a point cloud representation. Then

we use the point cloud registration to establish the cor-

respondence between the planes. In addition, we use a

frame-to-frame plane constraint and a frame-to-model

plane constraint to improve the camera tracking and

reconstruction.

In this work, we settle the challenge of plane match-

ing from a more holistic perspective, investigating all

the relevant planes within a point cloud registration

framework. This enables our algorithm to establish more

appropriate and complete associations for different planes,

especially in challenging real-world environments. Fur-

thermore, the matching strategy can also be extended


Fig. 3: Using plane parameter constraints to denoise. (a) The original point cloud with noise. (b) The point cloud after plane denoising.

for other high-level primitive-based camera tracking prob-

lems, such as object-based [32,41,44,45], contour-based [46],

line-based [28,47] and layout-based [19,30] approaches.

3 Method

Given a sequence of RGB-D frames {I_i, D_i}, where I_i denotes the i-th color image and D_i is the accompanying

depth image. The goal of this paper is to establish ro-

bust plane associations across consecutive frames, fa-

cilitating the fidelity of camera tracking and 3D recon-

struction in real-time. Fig. 2 shows a brief overview of

the proposed method.

The camera pose T_i of the i-th frame can be represented as:

$$\mathbf{T} = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{bmatrix}, \quad (1)$$

where R denotes the 3 × 3 rotation matrix and t is the 3 × 1 translation vector. We express a 3D vertex as v = [x, y, z]^T.

3.1 Embedded Plane Representation

We use the Hessian plane parameter to represent a

plane. The parameter of each plane can be represented

as Π = [n, d]^T, where n = [n_x, n_y, n_z]^T is the normal-

ized normal vector of the plane and d is a scalar indi-

cating the distance from the origin to the plane. For a

vertex set v in the Cartesian coordinate space, if the

vertex v satisfies n_i^T v + d_i = 0, we consider that it lies

on the plane Πi.

We extract planar surfaces in each depth frame via

PEAC [9], which utilizes Agglomerative Hierarchical Clustering (AHC) for fast plane extraction. In addition to the plane parameters n and

d, we also evaluate the average color c and centroid m

of the extracted plane.
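As a concrete illustration of the per-plane attributes we keep, here is a minimal sketch in Python/NumPy. The Plane container and the fit_plane helper are our own illustrative names with a plain least-squares fit; they are not the PEAC extractor itself.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Plane:
    """Per-plane attributes kept for matching; field names are illustrative."""
    n: np.ndarray  # unit normal [nx, ny, nz]
    d: float       # offset in the Hessian form n·v + d = 0
    c: np.ndarray  # average color of the plane's pixels
    m: np.ndarray  # centroid of the plane's 3D vertices

def fit_plane(vertices: np.ndarray, colors: np.ndarray) -> Plane:
    """Least-squares plane fit to the vertices of one extracted segment."""
    m = vertices.mean(axis=0)
    # the direction of smallest spread of the centered points is the normal
    _, _, vt = np.linalg.svd(vertices - m)
    n = vt[-1]
    d = -float(n @ m)  # chosen so that n·v + d = 0 holds on the plane
    return Plane(n=n, d=d, c=colors.mean(axis=0), m=m)
```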

Since the depth image acquired by the depth camera

contains a lot of noise, which affects the plane parameter estimation. To make the extracted plane parameters more accurate in each depth frame, we use the

spatial relationship, e.g., parallelism and perpendicular-

ity constraints, to optimize the plane parameters. We

adopt the method of [13] to detect the plane spatial

relationship between planes. We thus formulate the fol-

lowing energy minimization to refine the plane param-

eter of each plane:

$$E(\mathbf{n}, d) = E_s + \lambda_f E_f, \quad (2)$$

where

$$E_f = \sum_i^{\#planes} \sum_j^{\#verts} (\mathbf{n}_i^T \mathbf{v}_j + d_i)^2, \quad (3)$$

$$E_s = \sum_{i,j \in \Phi} \begin{cases} (|\mathbf{n}_i^T \mathbf{n}_j| - 1)^2 & \text{parallel}, \ i \neq j \\ (\mathbf{n}_i^T \mathbf{n}_j)^2 & \text{orthogonal}, \ i \neq j \\ 0 & \text{otherwise} \end{cases}, \quad (4)$$

where Ef measures the plane fitting error, which makes

sure that the fitted parameter contains as many ver-

tices as possible on the planes. Es encourages the spa-

tial relationship between planes. #planes represents the

number of the planes in the depth frame and #verts

represents the vertices number on the plane. We set

λ_f = E_f^0 / E_s^0 to balance the contribution of each term. If there is no spatial relationship (parallelism and perpendicularity) between planes, Eq. 2 degenerates into a plane parameter fitting process.
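The refinement of Eq. 2 can be posed as a small nonlinear least-squares problem. The sketch below is a stand-in using SciPy's generic solver rather than the authors' implementation: it stacks the fitting residuals of Eq. 3 and the parallel/orthogonal residuals of Eq. 4, and takes λ_f as an input parameter instead of deriving it from E_f^0/E_s^0 as in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_planes(planes, verts_per_plane, relations, lam_f=1.0):
    """Jointly refine [n_i, d_i] of all planes under Eq. 2 (sketch).

    planes:          list of (n, d) initial estimates
    verts_per_plane: list of (V_i, 3) vertex arrays, one per plane
    relations:       list of (i, j, kind), kind in {"parallel", "orthogonal"}
    """
    x0 = np.concatenate([np.r_[n, d] for n, d in planes])

    def residuals(x):
        ps = x.reshape(-1, 4)
        ns = ps[:, :3] / np.linalg.norm(ps[:, :3], axis=1, keepdims=True)
        ds = ps[:, 3]
        res = []
        # E_f: every vertex of plane i should satisfy n_i·v + d_i = 0
        for i, V in enumerate(verts_per_plane):
            res.append(np.sqrt(lam_f) * (V @ ns[i] + ds[i]))
        # E_s: parallelism / orthogonality between related plane pairs
        for i, j, kind in relations:
            dot = float(ns[i] @ ns[j])
            res.append(np.atleast_1d(abs(dot) - 1.0 if kind == "parallel" else dot))
        return np.concatenate(res)

    sol = least_squares(residuals, x0)
    out = sol.x.reshape(-1, 4)
    out[:, :3] /= np.linalg.norm(out[:, :3], axis=1, keepdims=True)
    return [(row[:3], float(row[3])) for row in out]
```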

If a vertex v belongs to a plane Π = [n, d]^T, it should satisfy f(v) = n_x x + n_y y + n_z z + d = 0. Due to noise, f = 0 is often not fully satisfied. To suppress this noise, we use the optimized plane parameters to refine the plane vertices via Eq. 5. The refined depth image can be used to improve depth camera tracking. The denoising result is shown in Fig. 3.

$$\begin{cases} x' = x - n_x f \\ y' = y - n_y f \\ z' = z - n_z f \end{cases}. \quad (5)$$
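A minimal sketch of this per-vertex projection, assuming unit normals and an (N, 3) vertex array (the function name is ours):

```python
import numpy as np

def denoise_plane_vertices(V: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Snap noisy vertices onto their refined plane via Eq. 5.

    V is an (N, 3) array of vertices; n is the unit plane normal and d the
    offset, so f(v) = n·v + d is the signed residual removed along the normal.
    """
    f = V @ n + d               # residual f(v) for each vertex
    return V - f[:, None] * n   # component-wise: x' = x - n_x f, etc.
```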

As described in [26] and [35], each point in the PPS

corresponds to a plane in the Cartesian space. Conversely, by projecting the detected planes of the scene into the PPS, we can acquire a point cloud for each frame (each point corresponding to a plane) in


the PPS, where the coordinates [θ_p, ϕ_p, d_p] of each point in the PPS can be expressed as follows:

$$\begin{cases} \theta_p = \arccos(n_{pz}) \\ \varphi_p = \operatorname{atan2}(n_{py}, n_{px}) \\ d_p = d_p \end{cases}, \quad (6)$$

where θ_p ∈ [0, π], ϕ_p ∈ (−π, π] and p indicates the plane

index.
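The embedding of Eq. 6 is a direct coordinate conversion; a small sketch (function names are ours):

```python
import numpy as np

def plane_to_pps(n: np.ndarray, d: float) -> np.ndarray:
    """Embed a plane [n, d] as a point [theta, phi, d] in the PPS (Eq. 6)."""
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))  # angle between n and the z-axis
    phi = np.arctan2(n[1], n[0])                 # azimuth of n, in (-pi, pi]
    return np.array([theta, phi, d])

def frame_to_pps_cloud(planes) -> np.ndarray:
    """Turn all planes of one frame into a sparse PPS point cloud."""
    return np.stack([plane_to_pps(n, d) for n, d in planes])
```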

3.2 Plane Matching

To estimate the camera pose of each frame accurately,

we have to find the plane correspondences between con-

secutive frames. Here we propose a novel plane match-

ing method by performing point cloud registration in

the PPS. As all planes in one frame can be aligned to

corresponding planes in another frame through a rigid

transformation matrix, there must also exist a trans-

formation matrix in the PPS, which can associate the

point clouds from two different frames. Conversely, if

we could register two point clouds in the PPS, then

we would obtain the corresponding relationship between planes in different frames. After the corresponding rela-

tions between planes are obtained, we propose a joint

optimization framework to evaluate the camera poses,

which considers the frame-to-frame plane consistency

between consecutive frames and the frame-to-model plane

consistency between frames and the global plane map

(GPM), as shown in Fig. 4.

3.2.1 Frame-to-Frame Plane Matching

Given a set of planes extracted from two consecutive

frames, the plane matching is to establish the frame-to-

frame plane correspondences between them. We project

the extracted planes in each frame into the PPS via

Eq. 6 to obtain two point clouds, and establish connec-

tions by registering the two set of point clouds. As the

measurement scales between θp, ϕp and d are different,

we divide the point cloud registration into two steps

to avoid the registration error caused by scale inconsis-

tency, as illustrated in Fig. 5.

If two planes are parallel in the same frame, they

would possess similar normal vectors, whereas the dis-

tances from the planes to the origin are different. So

we project all the points in the PPS to the θ-ϕ coordi-

nate plane to ignore the influence of distance d. Ideally,

the parallel planes will fall onto the same point. Due to depth noise, however, the extracted planes may not

Fig. 4: Plane matching using point cloud registration in the PPS. (a) RGB-D images. (b) Plane extraction. (c) The visualization of the GPM using the method of [1]. (d) The point clouds of consecutive frames in the PPS. (e) The point clouds of the current frame and the GPM in the global PPS. (f) The frame-to-frame plane matching between consecutive frames (the same color means the corresponding plane). (g) The frame-to-model plane matching between the current frame and the GPM.

Fig. 5: Point cloud registration in the PPS using PDA. (a) The clustering result for the target point cloud in the PPS and the corresponding projected point cloud on the θ-ϕ plane. (b) The weights between the source point and the points in the corresponding plane cluster for the PDA registration. (c) The source point cloud in the PPS.

be absolutely accurate, so the parallel planes fall into

a close region on the θ-ϕ coordinate plane, as shown

in Fig. 5(a). Therefore, we project all the points in

the PPS to the θ-ϕ coordinate plane to generate a 2D

point cloud. Based on our experiments, we use a clustering method with a tolerance of 0.05 to merge the points

close to each other on the θ-ϕ coordinate plane, as


shown in Fig. 5(a). A cluster means that all the planes

in the cluster group are parallel to each other.

Firstly, we register the two 2D point clouds to ignore

the influence of the distance d on the θ-ϕ coordinate

plane. In this way, we find a 2× 2 rotation matrix and

a 2× 1 translation vector to align two point clouds on

the θ-ϕ coordinate plane. After alignment, if the distance between two corresponding points on the θ-ϕ coordinate plane is greater than 0.05, we treat them as an invalid correspondence and ignore them.
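The paper does not spell out the 2D registration routine; as one concrete possibility, a Kabsch-style rigid fit on the θ-ϕ projections (assuming correspondences, e.g. from a nearest-neighbour step inside an ICP-style loop) recovers the 2 × 2 rotation and 2 × 1 translation:

```python
import numpy as np

def align_theta_phi_2d(src_2d, tgt_2d):
    """Rigid 2D alignment (Kabsch) of paired theta-phi projections.

    Assumes the (N, 2) arrays are already in correspondence; the paper's
    registration step does not require known pairings.
    """
    mu_s, mu_t = src_2d.mean(axis=0), tgt_2d.mean(axis=0)
    H = (src_2d - mu_s).T @ (tgt_2d - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t                     # 2x2 rotation, length-2 translation
```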

After we get the initial point correspondences, we in-

troduce a Probabilistic Data Association (PDA) point

cloud registration method [2]. In ICP, each point in the source point cloud is associated with only one point in the target point cloud, while PDA associates a point in the source point cloud with a set of candidate points in the target cloud, each with a different weight. In the second step, we modify the PDA point

cloud registration to accommodate our plane match-

ing method in PPS, which can effectively align two

point clouds along the axis d. Firstly, we use the ini-

tial plane correspondences from the 2D registration. If a source point is matched to a cluster group, the plane corresponding to this point may match any plane in this parallel-plane cluster. Therefore, the probability of all points

in the cluster group needs to be considered. Since all

the planes in the same cluster are parallel, we use the

distance between d to construct the probability for the

points in the matching cluster, as shown in Fig. 5(b).

We define w_{i,k} = e^{-((d_i - d_k)/d_{max})^2}, where d_{max} is the maximum distance along d between the point i and the points k in the matched cluster, i is a point in the source point cloud and k is a point in the corresponding cluster in the tar-

get. Then we use the PDA for point cloud registration

to find a translation vector along the axis d. After sev-

eral iterations or convergence, we choose the point with

the highest probability in the matching cluster as the

matching point. The corresponding point pairs are

the valid plane correspondences between frames.
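Putting the two steps together, the following hedged sketch uses a greedy stand-in for the θ-ϕ clustering (tolerance 0.05 as above), the weight w_{i,k} defined above, and a final pick of the most probable plane in a matched cluster; it assumes PPS points stored as (θ, ϕ, d) rows and a source cloud already aligned in 2D.

```python
import numpy as np

TOL = 0.05  # clustering / matching tolerance on the theta-phi plane (value from the paper)

def cluster_theta_phi(pps_points):
    """Greedy clustering of PPS points by their (theta, phi) projection; a simple
    stand-in for the clustering step, returning lists of point indices."""
    clusters = []
    for idx, p in enumerate(pps_points):
        for c in clusters:
            if np.linalg.norm(pps_points[c[0]][:2] - p[:2]) < TOL:
                c.append(idx)      # same (near-)parallel plane group
                break
        else:
            clusters.append([idx])
    return clusters

def pda_weights(d_src, cluster_d):
    """PDA weights w_{i,k} = exp(-((d_i - d_k)/d_max)^2) over a matched cluster."""
    dmax = float(np.abs(d_src - cluster_d).max()) or 1.0   # guard against zero
    return np.exp(-((d_src - cluster_d) / dmax) ** 2)

def best_match(p_src, tgt_points, tgt_clusters):
    """Pick the most probable target plane for one (already 2D-aligned) source point."""
    for c in tgt_clusters:
        if np.linalg.norm(tgt_points[c[0]][:2] - p_src[:2]) < TOL:
            w = pda_weights(p_src[2], tgt_points[c][:, 2])
            return c[int(np.argmax(w))]          # index of the matched target plane
    return None                                  # no valid correspondence
```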

According to [7,35], a valid camera transformation matrix can be obtained from at least three linearly independent pairs of matching planes. Otherwise, the problem becomes under-constrained. We can use the clustering result to determine whether the system is under-constrained: if the number of plane clusters is less than three, there are not enough linearly independent planes. Besides, we can

infer which planes are coplanar according to the clus-

tering and distance between planes and merge them as

one plane, as shown in Fig. 6.

Fig. 6: Merging coplanar planes into one plane using point cloud clustering in the PPS. (a) The input RGB-D images. (b) Initial plane extraction results. (c) Coplanar plane merging result by clustering in the PPS.

3.2.2 Frame-to-Model Plane Matching

In order to use the planar structure of the whole scene

to guide the camera tracking and 3D reconstruction,

we introduce a global plane map (GPM) to manage

all the planes in the scene incrementally. The GPM is

an abstract representation of the scene by planes, which contains the updated plane attributes, i.e., [n, d]^T, the average color c and the centroid m, accumulated from all the previous frames. We use local-plane-to-global-plane constraints as frame-to-model constraints to alleviate camera drift.

We adopt the matching strategy, similar to the frame-

to-frame plane matching, to find correspondences be-

tween the current frame and the GPM. To avoid the is-

sue that thin plates may not be correctly matched due

to normal flipping from the front to the back scanning,

we adjust the normal direction n of all planes to point to

the origin of the global coordinate system before frame-

to-model plane matching. Through this plane matching

step, we can enforce an additional frame-to-model con-

straint for each frame.
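A small sketch of this normal adjustment under the Hessian convention n·v + d = 0 used earlier; the sign test below follows from that convention and is our assumption, since the paper does not spell it out:

```python
import numpy as np

def orient_normal_toward_origin(n: np.ndarray, d: float):
    """Flip [n, d] so that the normal points toward the global origin.

    For a plane n·v + d = 0, the origin lies on the +n side exactly when d > 0,
    so flipping (n, d) -> (-n, -d) when d < 0 gives a consistent orientation.
    """
    if d < 0:
        return -n, -d
    return n, d
```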

3.3 Depth Camera Tracking

Using these frame-to-frame and frame-to-model plane

correspondence constraints, we construct our tracking

objective function as follows:

$$E(\mathbf{T}) = E_{geo} + \lambda_1 E_{loc} + \lambda_2 E_{GPM}. \quad (7)$$

Egeo represents the standard ICP tracking based on

geometric consistency between consecutive frames. This

term enables stability in the case when we cannot ac-

quire enough plane correspondences. We use the point-

to-plane metric to measure the geometric consistency

term:

$$E_{geo} = \sum_{(i,j)\in\Omega} \big((\mathbf{T}_t \mathbf{v}_i - \mathbf{T}_{t-1} \mathbf{v}_j)^T \mathbf{n}_i\big)^2, \quad (8)$$


where t represents the index of frame. Tt is the camera

pose of the t-th frame. (i, j) ∈ Ω is the corresponding

pixel set between consecutive frames by projective data

association [25]. vi and vj indicate the 3D vertices cor-

responding to the pixel i, j on frame t and frame t− 1

respectively. ni is the normal of the vertex vi in the

global coordinate system.

Eloc represents the local frame-to-frame plane con-

sistency constraint term. This term makes the corre-

sponding planes between consecutive frames align as

much as possible. We use the plane-to-plane metric to

measure the error between corresponding plane pairs.

That means the vertices on the plane should fall

onto the corresponding plane through the camera trans-

formation between consecutive frames. To ensure the

accuracy of depth camera tracking, we measure it bidi-

rectionally.

$$E_{loc} = \sum_{(i,j)\in\Phi} \left[ \sum_k^{N_i} (\mathbf{n}_j^T \mathbf{T}_t \mathbf{v}_k + d_j)^2 + \sum_l^{N_j} (\mathbf{n}_i^T \mathbf{T}_{t-1} \mathbf{v}_l + d_i)^2 \right], \quad (9)$$

where (i, j) ∈ Φ represents the corresponding plane set,

i is the plane index in current frame and j in previous

frame. [n_i, d_i]^T indicates the plane parameter of plane i

in the global coordinate. k and l are the index of vertices

on the plane i and plane j. Ni is the total number of

vertices on the plane i.

EGPM is the term of the global frame-to-model plane

consistency, which ensures that the corresponding planes between the current frame and the GPM are aligned as closely as possible.

$$E_{GPM} = \sum_{(i,j)\in\Phi} \sum_k^{N_i} (\mathbf{n}_j^T \mathbf{T}_t \mathbf{v}_k + d_j)^2, \quad (10)$$

where i is the plane index in the current frame and j

is the corresponding plane index in the GPM. [nj , dj ]T

is the plane parameter of plane j of GPM in the global

coordinate system.

λ1 and λ2 are balance coefficients. As mentioned

above, if the under-constraint problem occurs, i.e., the

matching plane pairs are less than the given threshold,

we set λ_1 = λ_2 = 0 to cancel the plane constraints; otherwise we set λ_1 = 5 and λ_2 = 10 to strengthen the plane constraints. We utilize the Gauss-Newton method to solve Eq. 7.
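For concreteness, the sketch below stacks the residuals of Eqs. 8-10 and hands them to a generic least-squares solver. It is a simplified stand-in, not the authors' implementation: SciPy instead of a GPU Gauss-Newton solver, a first-order pose parameterization, only one direction of Eq. 9, and all correspondences assumed to be pre-associated and expressed in world coordinates.

```python
import numpy as np
from scipy.optimize import least_squares

def pose_from_xi(xi):
    """First-order SE(3) parameterization xi = [wx, wy, wz, tx, ty, tz]
    (valid for small updates, as in one Gauss-Newton-style step)."""
    wx, wy, wz, tx, ty, tz = xi
    R = np.array([[1.0, -wz,  wy],
                  [ wz, 1.0, -wx],
                  [-wy,  wx, 1.0]])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, [tx, ty, tz]
    return T

def tracking_residuals(xi, icp_pairs, f2f_planes, f2m_planes, lam1=5.0, lam2=10.0):
    """Residuals of E_geo + lam1 * E_loc + lam2 * E_GPM (Eqs. 7-10).

    icp_pairs:  list of (v_cur, v_prev_world, n_world) point-to-plane pairs
    f2f_planes: list of (verts_cur, (n, d)) with the matched previous-frame plane in world coords
    f2m_planes: list of (verts_cur, (n, d)) with the matched GPM plane in world coords
    """
    T = pose_from_xi(xi)
    R, t = T[:3, :3], T[:3, 3]
    res = []
    for v_c, v_p, n in icp_pairs:                 # E_geo (Eq. 8), point-to-plane
        res.append(n @ (R @ v_c + t - v_p))
    for verts, (n, d) in f2f_planes:              # E_loc (Eq. 9), one direction only
        res.extend(np.sqrt(lam1) * ((verts @ R.T + t) @ n + d))
    for verts, (n, d) in f2m_planes:              # E_GPM (Eq. 10)
        res.extend(np.sqrt(lam2) * ((verts @ R.T + t) @ n + d))
    return np.asarray(res)

# A single tracking step could then look like (inputs assumed given):
# xi = least_squares(tracking_residuals, np.zeros(6),
#                    args=(icp_pairs, f2f_planes, f2m_planes)).x
```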

3.4 Reconstruction and Map Updating

After obtaining the camera pose of each frame, we in-

tegrate all the depth data into the global reconstructed

model. In order to maintain the GPM better and re-

fine the reconstructed model in real-time, our system is

built on ElasticFusion [40]. Rather than the commonly used SDF representation, ElasticFusion utilizes a point-based representation, referred to as surfels, to represent the reconstructed model. A surfel uses an ellipse to

describe the local planar region without any topological

information, which makes surfels easy to add, delete and correct in real time. We also use the proposed

plane match method to detect the loop closure between

the current frame and the GPM, then adopt the method

of ElasticFusion to non-rigidly adjust the surfels of the

reconstructed model to generate a globally consistent

surfel-based reconstructed model.

We update the GPM to manage all the planes in the

scene incrementally. If the plane in the current frame

cannot find a matching plane in GPM, we consider it

as a new one. For each new plane, we transform it into

the global coordinate system. Then we add this plane

to the GPM as a new plane. In contrast, if a plane in

the current frame finds a corresponding plane in the

GPM, we merge it to the corresponding plane in the

GPM.
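A simplified sketch of this incremental update follows; the weighted-averaging merge and the dictionary layout are our assumptions, since the paper only states that matched planes are merged into the GPM.

```python
import numpy as np

def update_gpm(gpm, frame_planes, matches, T_world):
    """Incrementally maintain the global plane map (simplified sketch).

    gpm:          list of dicts {n, d, c, m, weight} in world coordinates
    frame_planes: list of dicts {n, d, c, m} in the current camera frame
    matches:      dict {frame_plane_index: gpm_plane_index}; unmatched indices absent
    T_world:      4x4 camera-to-world pose of the current frame
    """
    R, t = T_world[:3, :3], T_world[:3, 3]
    for idx, p in enumerate(frame_planes):
        n_w = R @ p["n"]
        m_w = R @ p["m"] + t
        d_w = -float(n_w @ m_w)        # re-derive d from the world-space centroid
        if idx in matches:             # merge into the existing GPM plane
            g = gpm[matches[idx]]
            w = g["weight"]
            g["n"] = w * g["n"] + n_w
            g["n"] /= np.linalg.norm(g["n"])
            g["d"] = (w * g["d"] + d_w) / (w + 1)
            g["c"] = (w * g["c"] + p["c"]) / (w + 1)
            g["m"] = (w * g["m"] + m_w) / (w + 1)
            g["weight"] = w + 1
        else:                          # add as a new global plane
            gpm.append({"n": n_w, "d": d_w, "c": p["c"], "m": m_w, "weight": 1})
```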

To lessen error accumulation in the integration, we

utilize a similar strategy as ElasticFusion. Besides the

basic properties of each surfel, such as radius, loca-

tion, color and normal, we add an additional plane la-

bel attribute to indicate which plane it belongs to and

a counter attribute that records how many times the surfel has been assigned to the same plane. When the counter of a surfel reaches 10, we consider its plane label to be stable. In the following reconstruction, we

will not change the surfel label. After every 100 frames,

we use all the stable surfels belonging to the same plane to filter and update the plane parameters in the GPM. In

this way, it is able to significantly alleviate the camera

drifting. Even if the camera poses evaluated by some

frames are inaccurate, they can still be corrected by

the global plane constraint of the subsequent frames.
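A minimal sketch of the label/counter bookkeeping described above (the dict-based surfel and its field names are illustrative); the periodic re-fitting of GPM plane parameters from stable surfels every 100 frames would sit on top of this:

```python
def update_surfel_plane_label(surfel: dict, plane_label: int, stable_after: int = 10):
    """Track how often a surfel is assigned to the same plane; freeze its label
    once the count reaches `stable_after` (10 in the paper)."""
    if surfel.get("stable", False):
        return                                   # label already frozen
    if surfel.get("label") == plane_label:
        surfel["count"] = surfel.get("count", 0) + 1
        if surfel["count"] >= stable_after:
            surfel["stable"] = True              # never changed afterwards
    else:
        surfel["label"], surfel["count"] = plane_label, 1
```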

4 Results

We perform qualitative and quantitative analysis of our

methods on several RGB-D datasets, including ICL-

NUIM [14], TUM RGB-D [34] and RGB-D sequences

acquired by ourselves. To run in real time, we deployed the whole pipeline of the proposed method on the

GPU. All experiments were conducted on a PC with

Intel Core i7 CPU, 16GB RAM, and GeForce GTX

1060 6G. We compared our method with other state-

of-the-art RGB-D SLAM and planar-based methods,

including ORB-SLAM2 [24], ElasticFusion [40], KDP-

SLAM [23], LPVO [18] and LSLAM [17]. The experi-


Fig. 7: Comparison results with ElasticFusion [40] and ORB-SLAM2 [24] on the TUM dataset. (a) 'fr3/structure_texture_near' and (b) 'fr3/cabinet'. The results show the reconstructed mesh, texture and the camera trajectory compared with the ground truth.

mental results show that our method outperforms other

methods, especially in challenging scenarios.

4.1 TUM RGB-D Dataset

We select several challenging datasets from [34] to demon-

strate the effectiveness and robustness of our method.

These datasets contain planar structures lacking geometric details or texture details, or even both, which is very challenging for current SLAM and

reconstruction algorithms. We report the root mean

square error (RMSE) of absolute trajectory error (ATE)

as the metric between estimated camera trajectories

and ground truth (GT) in Table 1. The evaluations of

ORB-SLAM2 [24] and ElasticFusion [40] are based on

the code provided by the authors. For the other methods, the code has not been released to the public, so we directly use the results reported in their

papers. From Table 1, we can see that our method per-

forms better on most datasets than the other methods.

The reason is that our frame-to-frame and frame-to-

model plane constraints effectively prevent the camera

tracking from excessively sliding even in the absence

of geometric and texture details. Moreover, the frame-

to-model plane constraints can effectively alleviate the

cumulative error by the GPM. From Table 1, we find

that the proposed method does not perform better on

the datasets str_notex_near and str_tex_near. The reason is that in these sequences the camera stays too close to the scanned object. As a result, some images

may only observe a very limited portion of the scene and

have very few detected planes. This will potentially lead

to the problem of under-constraint. To overcome this

problem, we set the weight of the plane constraint to

zero, so in this case the whole algorithm would degen-

erate into the common ICP algorithm, and accordingly

the accuracy of camera tracking would decrease.

To demonstrate the effectiveness of our approach,

we compare the reconstructed results and camera tra-

jectories of the proposed method with ElasticFusion

and ORB-SLAM2 [24], as shown in Fig. 7. From the

evaluated camera trajectory, we can see that the esti-

mated camera pose of the proposed method is more

accurate than ElasticFusion. Although ORB-SLAM2

shows better performance on the texture-rich dataset,

such as Fig. 7(a), camera tracking failures are prone

to occur in planar scenes without texture, as shown in

Fig. 7(b). In contrast, our method is able to reduce the

accumulation errors, benefiting from the planar constraint of the GPM.

We conduct more experiments on the TUM RGB-

D dataset [34]. Although it is not specifically designed

for evaluating structure-based SLAM methods, it shows

some more general indoor environments. We compare

the proposed method with several state-of-the-art meth-

ods, including Kintinuous [39], CPA-SLAM [22], Elas-

ticFusion [40], BundleFusion [6], DVO [16] and Plane-

Match [33]. Table 2 reports the RMSE (the smaller is

better) of each method, and our method exhibits com-

parable and even better performance in these scenes,

even compared with the closely related data-driven method Plane-

Match [33].

Fig. 8 shows the reconstructed results of our method

and the comparison between our evaluated camera tra-

jectories and the ground truth from the TUM RGB-D

dataset [34].


Fig. 8: Reconstructed results of our method and the comparison between our estimated trajectories and the ground truth from the TUM RGB-D dataset: (a) fr1/xyz, (b) fr1/rpy, (c) fr2/xyz, (d) fr3/long_office_household.

                     ORB-SLAM2   ElasticFusion   LPVO    LSLAM   Ours
fr3/str_notex_far    0.276       0.032           0.075   0.141   0.027
fr3/str_notex_near   0.652       0.414           0.080   0.066   0.109
fr3/str_tex_far      0.024       0.019           0.174   0.212   0.015
fr3/str_tex_near     0.019       0.061           0.115   0.156   0.023
fr3/cabinet          ×           1.059           0.520   0.291   0.052
fr3/large_cabinet    0.179       1.160           0.279   0.141   0.060

Table 1: Comparison of ATE RMSE (unit: m) on the TUM RGB-D dataset [34]. "×" means camera tracking failure.

             Kintinuous   CPA-SLAM   ElasticFusion   BundleFusion   DVO     PlaneMatch   Ours
fr1/desk     0.037        0.018      0.020           0.016          0.021   0.015        0.014
fr1/rpy      0.028        0.024      0.035           -              0.020   0.021        0.033
fr1/xyz      0.017        0.011      0.013           0.012          0.011   0.011        0.011
fr2/xyz      0.029        0.014      0.012           0.014          0.018   0.011        0.012
fr3/office   0.030        0.025      0.026           0.028          0.035   0.018        0.017

Table 2: Comparison of ATE RMSE (unit: m) on the TUM RGB-D dataset [34]. '-' indicates that there is no performance record or source code reported for this work.

4.2 ICL-NUIM Dataset

The ICL-NUIM dataset contains two scenes, a living room (lv) and an office (of), rendered from a virtual scene. Each of them contains four scanning sequences

and each sequence is divided into two kinds. One is

completely noise-free (from kt0 to kt3 ), and the other

contains sensor noise in the depth images (from kt0n

to kt3n). The lv dataset provides ground truth cam-

era trajectory and reconstructed model for quantitative

analysis. Table 3 reports the RMSE evaluation of differ-

ent methods on lv noise datasets. We can see that our

method is comparable to other state-of-the-art meth-

ods. Because the depth images in this dataset contain a lot of noise, ORB-SLAM2 [24] and KDP-SLAM [23] perform slightly better. This is because ORB-SLAM2 de-

pends on the ORB feature from the color images and

KDP-SLAM utilizes additional color consistency con-

straints, so the influence of depth image noise is alle-

viated. However, this makes their methods more com-

putationally expensive.

The proposed method is built on the foundation of

ElasticFusion [40]. Fig. 9 shows the reconstructed result

comparison between the proposed method and Elastic-

Fusion [40] to demonstrate the effectiveness of the pro-

posed plane tracking method. The heat map represents

the absolute distance between each vertex on the re-

constructed model and the corresponding vertex on the

ground truth. The histogram gives the distribution of

all vertices in different absolute distance errors and the


          ORB-SLAM2   ElasticFusion   KDP-SLAM   LPVO    LSLAM   Ours
lr-kt0n   0.010       0.038           0.009      0.015   0.012   0.027
lr-kt1n   0.191       0.020           0.019      0.039   0.027   0.026
lr-kt2n   0.030       0.032           0.029      0.034   0.053   0.028
lr-kt3n   0.016       0.331           0.153      0.102   0.143   0.061

Table 3: ATE RMSE (unit: m) statistics on the ICL-NUIM dataset [14].

[Figure 9: per-vertex error heat maps on lr-kt1n and lr-kt2n for (a) ElasticFusion and (b) Ours, each panel annotated with the mean, median, std and max absolute distance error.]

Fig. 9: The reconstructed results comparison on lr-kt1n and lr-kt2n by absolute distance.

vertex-to-model error statistics are listed beside. It can

be seen from the comparison results that our method

can better preserve the structure of the scene and alle-

viate the error accumulation, as indicated in the vertex-

to-model absolute distance error statistics.

4.3 Author-collected Datasets

To demonstrate the effectiveness of our method in real

scenes, we scanned a corridor about 60 meters long at a relatively fast scanning speed and obtained two datasets.

One is acquired by scanning the whole corridor and

the other by scanning only one side of the corridor, as shown in

Fig. 10. Reconstructing this corridor is very challenging

for current state-of-the-art methods since the corridor

only contains simple plane structures and lacks texture

and geometric details. The comparison between our ap-

proach and ElasticFusion is shown in Fig. 10. The cam-

era drift usually occurs with ElasticFusion [40], which

causes unexpected deformations. In contrast, our re-

sults can well preserve the structure of the scene.

4.4 Ablation Studies

To investigate the value of each component of the pro-

posed method, we performed comparisons of the pro-

posed method in three cases including a) without plane

constraints, b) with only the frame-to-frame plane con-

straints, and c) with joint frame-to-frame and

frame-to-model plane constraints, as shown in Fig. 11.

For case a), the accumulative errors will be very seri-

ous in scenes where texture and geometric details are

lacking, as shown in Fig. 11(a). When we add frame-

to-frame plane constraints, the camera tracking accu-

racy is improved. However, when there are fewer planes in the scene, the camera tracking tends to slide

and produce cumulative errors, as shown in Fig. 11(b).

When the frame-to-model plane constraint is added,

the camera drifting can be effectively alleviated, and

the camera sliding can be prevented. The tracking ac-

curacy of the camera is greatly improved, as shown in

Fig. 11(c).

Limitations. The proposed method also suffers from

some limitations. Firstly, since it needs to build a GPM to incrementally represent the reconstructed scene and to perform global plane matching, the total frame rate is about 25 FPS in general; as the number of planes increases, the performance may decline accordingly. Moreover, the proposed

method is based on planes for camera tracking. If it is

unable to detect planes in the scene, it degenerates into the standard ICP-based method. Thirdly, for

some slightly inclined planes, the method may mistak-

enly treat them as parallel planes and impose a negative

effect on the final reconstruction results.

5 Conclusions

In this paper, we have presented a holistic strategy

for plane matching by exploring the point cloud in the

plane parameter space (PPS), which can both facilitate

depth camera pose tracking and scene reconstruction.

We combine the geometric consistency and plane con-

sistency as constraints to improve the camera tracking.

In addition, we build an incremental global plane map

(GPM) as a proxy for the reconstructed scene and con-

duct a frame-to-model optimization to further reduce

accumulation errors. The experiments demonstrate that the proposed approach outperforms current state-of-the-art methods.


Fig. 10: Comparison results between our method and ElasticFusion [40] on the datasets acquired by ourselves using Kinect v1.

[Figure 11: input images and reconstructions with (a) RMSE = 1.267, (b) RMSE = 0.525, (c) RMSE = 0.060.]

Fig. 11: Ablation studies of our method. (a) Without plane constraints. (b) Only the frame-to-frame plane constraints. (c) Joint frame-to-frame and frame-to-model plane constraints.

Acknowledgements This work was partly supported by the National Key Research and Development Program of China (2017YFB1002600), the NSFC (No. 61672390, 61972298), the Key Technological Innovation Projects of Hubei Province (2018AAA062), and the Wuhan Science and Technology Plan Project (No. 2017010201010109).

Conflict of interest

The authors declare that they have no conflict of inter-

est.

References

1. Adrien Kaiser, A.Y., Boubekeur, T.: Proxy clouds for live RGB-D stream processing and consolidation. In: ECCV (2018)
2. Agamennoni, G., Fontana, S., Siegwart, R.Y., Sorrenti, D.G.: Point clouds registration with probabilistic data association. In: IROS (2016)
3. Besl, P.J., Mckay, N.D.: A method for registration of 3-D shapes. In: Robotics, pp. 239–256 (1992)
4. Bylow, E., Sturm, J., Kerl, C., Kahl, F., Cremers, D.: Real-time camera tracking and 3D reconstruction using signed distance functions. In: Robotics: Science and Systems (2013)
5. Concha, A., Civera, J.: DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence. In: IROS (2015)
6. Dai, A., Nießner, M., Zollofer, M., Izadi, S., Theobalt, C.: BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface re-integration. ACM Transactions on Graphics (2017)
7. Dou, M., Guan, L., Frahm, J.M., Fuchs, H.: Exploring high-level plane primitives for indoor 3D reconstruction with a hand-held RGB-D camera. ACCV Workshops (2013)
8. Dzitsiuk, M., Sturm, J., Maier, R., Ma, L., Cremers, D.: De-noising, stabilizing and completing 3D reconstructions on-the-go using plane priors. In: ICRA (2017)
9. Feng, C., Taguchi, Y., Kamat, V.R.: Fast plane extraction in organized point clouds using agglomerative hierarchical clustering. In: ICRA (2014)
10. Fernandez-Moral, E., Mayol-Cuevas, W., Arevalo, V., Gonzalez-Jimenez, J.: Fast place recognition with plane-based maps. In: ICRA (2013)
11. Flint, A., Mei, C., Reid, I., Murray, D.: Growing semantically meaningful models for visual SLAM. In: CVPR (2010)
12. Fu, Y., Yan, Q., Yang, L., Liao, J., Xiao, C.: Texture mapping for 3D reconstruction with RGB-D sensor. In: CVPR (2018)
13. Halber, M., Funkhouser, T.: Fine-to-coarse global registration of RGB-D scans. In: CVPR (2017)
14. Handa, A., Whelan, T., McDonald, J., Davison, A.J.: A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In: ICRA (2014)
15. Jeon, J., Jung, Y., Kim, H., Lee, S.: Texture map generation for 3D reconstructed scenes. The Visual Computer 32(6-8), 955–965 (2016)
16. Kerl, C., Sturm, J., Cremers, D.: Robust odometry estimation for RGB-D cameras. In: ICRA (2013)
17. Kim, P., Coltin, B., Kim, H.J.: Linear RGB-D SLAM for planar environments. In: ECCV (2018)
18. Kim, P., Coltin, B., Kim, H.J.: Low-drift visual odometry in structured environments by decoupling rotational and translational motion. In: ICRA (2018)
19. Le, P.H., Kosecka, J.: Dense piecewise planar RGB-D SLAM for indoor environments. In: IROS (2017)
20. Lee, J.K., Yea, J., Park, M.G., Yoon, K.J.: Joint layout estimation and global multi-view registration for indoor reconstruction. In: ICCV (2017)
21. Li, R., Liu, Q., Gui, J., Gu, D., Hu, H.: A novel RGB-D SLAM algorithm based on points and plane-patches. In: CASE (2016)
22. Ma, L., Kerl, C., Stuckler, J., Cremers, D.: CPA-SLAM: Consistent plane-model alignment for direct RGB-D SLAM. In: ICRA (2016)
23. Ming, H., Westman, E., Zhang, G., Kaess, M.: Keyframe-based dense planar SLAM. In: ICRA (2017)
24. Mur-Artal, R., Tardos, J.D.: ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics 33(5), 1255–1262 (2017)
25. Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohi, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-time dense surface mapping and tracking. In: ISMAR (2012)
26. Okada, K., Kagami, S., Inaba, M., Inoue, H.: Plane segment finder: algorithm, implementation and applications. In: ICRA (2001)
27. Proenca, P.F., Gao, Y.: Probabilistic RGB-D odometry based on points, lines and planes under depth uncertainty. Robotics & Autonomous Systems 104, 25–39 (2018)
28. Pumarola, A., Vakhitov, A., Agudo, A., Sanfeliu, A., Moreno-Noguer, F.: PL-SLAM: Real-time monocular visual SLAM with points and lines. In: ICRA (2017)
29. Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: An efficient alternative to SIFT or SURF. In: International Conference on Computer Vision, pp. 2564–2571 (2011)
30. Salas, M., Hussain, W., Concha, A., Montano, L., Civera, J., Montiel, J.M.M.: Layout aware visual tracking and mapping. In: IROS (2015)
31. Salas-Moreno, R.F., Glocken, B., Kelly, P.H.J., Davison, A.J.: Dense planar SLAM. In: ISMAR (2014)
32. Salas-Moreno, R.F., Newcombe, R.A., Strasdat, H., Kelly, P.H.J., Davison, A.J.: SLAM++: Simultaneous localisation and mapping at the level of objects. In: CVPR (2013)
33. Shi, Y., Xu, K., Nießner, M., Rusinkiewicz, S., Funkhouser, T.: PlaneMatch: Patch coplanarity prediction for robust RGB-D reconstruction. In: ECCV (2018)
34. Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D.: A benchmark for the evaluation of RGB-D SLAM systems. In: IROS (2012)
35. Sun, Q., Yuan, J., Zhang, X., Sun, F.: RGB-D SLAM in indoor environments with STING-based plane feature extraction. IEEE/ASME Transactions on Mechatronics PP(99), 1–1 (2017)
36. Taguchi, Y., Jian, Y.D., Ramalingam, S., Feng, C.: SLAM using both points and planes for hand-held 3D sensors. In: ISMAR (2012)
37. Taguchi, Y., Jian, Y.D., Ramalingam, S., Feng, C.: Point-plane SLAM for hand-held 3D sensors. In: ICRA (2013)
38. Wei, M., Yan, Q., Luo, F., Song, C., Xiao, C.: Joint bilateral propagation upsampling for unstructured multi-view stereo. The Visual Computer 35(6-8), 797–809 (2019)
39. Whelan, T., Kaess, M., Fallon, M., Johannsson, H., Leonard, J., Mcdonald, J.: Kintinuous: Spatially extended KinectFusion. Robotics & Autonomous Systems 69(C), 3–14 (2012)
40. Whelan, T., Leutenegger, S., Moreno, R.S., Glocker, B., Davison, A.: ElasticFusion: Dense SLAM without a pose graph. In: Robotics: Science and Systems (2015)
41. Xiao, J., Owens, A., Torralba, A.: SUN3D: A database of big spaces reconstructed using SfM and object labels. In: ICCV (2013)
42. Yang, L., Yan, Q., Fu, Y., Xiao, C.: Surface reconstruction via fusing sparse-sequence of depth images. IEEE Transactions on Visualization and Computer Graphics 24(2), 1190–1203 (2017)
43. Yang, L., Yan, Q., Xiao, C.: Shape-controllable geometry completion for point cloud models. The Visual Computer 33(3), 385–398 (2017)
44. Yang, S., Scherer, S.: CubeSLAM: Monocular 3-D object SLAM. IEEE Transactions on Robotics (2019)
45. Zhang, Y., Xu, W., Tong, Y., Zhou, K.: Online structure analysis for real-time indoor scene reconstruction. ACM Transactions on Graphics 34(5), 159 (2015)
46. Zhou, Q.Y., Koltun, V.: Depth camera tracking with contour cues. In: CVPR (2015)
47. Zuo, X., Xie, X., Liu, Y., Huang, G.: Robust visual SLAM with point and line features. In: IROS (2017)


Yanping Fu received the BSc degree from Changchun University in 2008, and the MSc degree from Yanshan University in 2011. He is currently pursuing the PhD degree with the School of Computer Science, Wuhan University, Wuhan, China. His research is focused on 3D reconstruction, texture mapping and SLAM. He is a student member of IEEE.

Qingan Yan received his PhD degree at the Computer Science Department of Wuhan University in 2017. Before that, he got his BSc and MSc degrees in Computer Science from Southwest University of Science and Technology, and Hubei University for Nationalities, in 2012 and 2008 respectively. He is currently working at the JD.com Silicon Valley Research Center in California as a computer vision scientist. His research interests include 3D reconstruction, feature analysis and scene understanding.

Jie Liao received the BSc degree from Wuhan University in 2013, and the MSc degree from Huazhong University of Science and Technology in 2015. He is currently pursuing the PhD degree at the School of Computer Science, Wuhan University. His work mainly focuses on multi-view stereo, IBR and light field reconstruction.

Alix L.H. Chow received his Bachelor's degree in computer engineering from the Chinese University of Hong Kong, Hong Kong, in 2001, and received his Master's degree in computer science and engineering from the Chinese University of Hong Kong, Hong Kong, in 2003. He received his Ph.D. degree in computer science at the University of Southern California, Los Angeles. During his Ph.D. studies, he worked in the IBM T.J. Watson Research Center, New York, USA. He is now a director at the Xiaomi AI Department of Xiaomi Technology Co. LTD. His interests include mobile Internet, social networks, computer networks and distributed systems.

Chunxia Xiao received his BSc and MSc degrees from the Mathematics Department of Hunan Normal University in 1999 and 2002, respectively, and his PhD degree from the State Key Lab of CAD & CG of Zhejiang University in 2006. Currently, he is a professor in the School of Computer Science, Wuhan University, China. From October 2006 to April 2007, he worked as a postdoc at the Department of Computer Science and Engineering, Hong Kong University of Science and Technology, and during February 2012 to February 2013, he visited the University of California, Davis for one year. His main interests include computer graphics, computer vision and machine learning. He is a member of IEEE.