Bundle Adjustment in the Large

Sameer Agarwal (Google), Noah Snavely (Cornell University), Steven M. Seitz (Google & University of Washington), Richard Szeliski (Microsoft Research)
Exact Step Levenberg Marquardt

Until convergence:
    min∆xk ‖J(xk)∆xk + f(xk)‖² + µ‖D(xk)∆xk‖²
    if ‖f(xk + ∆xk)‖² < ‖J(xk)∆xk + f(xk)‖²
        xk+1 ← xk + ∆xk
        µ ← µ/2
    else
        xk+1 ← xk
        µ ← 2µ
    k ← k + 1
Calculating the LM Step

Normal Equations:
    [J⊤(x)J(x) + µD⊤(x)D(x)]∆x = −J⊤(x)f(x)
    Hµ∆x = −g

Hessian Approximation (block structure over cameras and points):
    [ B   E ] [∆y]   [v]
    [ E⊤  C ] [∆z] = [w]

Schur Complement:
    [B − EC⁻¹E⊤]∆y = v − EC⁻¹w
    ∆z = C⁻¹(w − E⊤∆y)

The reduced camera matrix S = B − EC⁻¹E⊤ is:
1. Expensive to compute and store.
2. Extremely expensive to factorize.
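As a toy check of the elimination, a NumPy sketch with a small random SPD matrix standing in for Hµ. The sizes and data here are hypothetical; in real problems C is block diagonal, which is what makes C⁻¹ cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
nc, n_pts = 6, 9                       # toy camera-block / point-block sizes
A = rng.standard_normal((nc + n_pts, nc + n_pts))
H = A @ A.T + np.eye(nc + n_pts)       # SPD stand-in for H_mu
B, E, C = H[:nc, :nc], H[:nc, nc:], H[nc:, nc:]
rhs = rng.standard_normal(nc + n_pts)
v, w = rhs[:nc], rhs[nc:]

# Eliminate the point block via the Schur complement.
Cinv = np.linalg.inv(C)
S = B - E @ Cinv @ E.T                 # reduced camera matrix
dy = np.linalg.solve(S, v - E @ Cinv @ w)
dz = Cinv @ (w - E.T @ dy)

# the same solution as a direct solve of the full system
full = np.linalg.solve(H, rhs)
```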
[Figure: Solution accuracy. One panel per tolerance τ = 0.01, 0.001, 0.0001, comparing the explicit-direct, explicit-sparse, explicit-jacobi, implicit-jacobi, implicit-ssor, and normal-jacobi solvers.]
Experiments
Code and data on our website http://grail.cs.washington.edu/projects/bal
Inexact Step Levenberg Marquardt

Replace the exact solution to
    min∆xk ‖J(xk)∆xk + f(xk)‖² + µ‖D(xk)∆xk‖²
with an approximate solution satisfying
    ‖Hµ(xk)∆xk + g(xk)‖ ≤ ηk‖g(xk)‖
where ηk is a forcing sequence that controls the quality of the LM step.

Use Preconditioned Conjugate Gradients.
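The forcing-sequence stopping rule can be sketched with plain (unpreconditioned) conjugate gradients truncated at ηk‖g‖ — a NumPy toy on a random SPD stand-in for Hµ, not the poster's PCG setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
A = rng.standard_normal((n, n))
H = A @ A.T + np.eye(n)                # SPD stand-in for H_mu(x_k)
g = rng.standard_normal(n)             # stand-in gradient

def truncated_cg(H, b, eta, max_iters=100):
    """Plain CG on H x = b, stopped once the residual falls below eta*||b||."""
    x = np.zeros_like(b)
    r = b.copy()                       # residual b - H x for x = 0
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(max_iters):
        if np.sqrt(rs) <= eta * b_norm:
            break
        Hp = H @ p
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# a loose eta gives a cheap, inexact LM step
dx = truncated_cg(H, -g, eta=0.1)
```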
Calculating the Inexact Step

But which linear system?

    Hµ∆x = −g
A large linear system, but matrix-vector products are easy to evaluate. Or,

    [B − EC⁻¹E⊤]∆y = v − EC⁻¹w
A much smaller linear system, expensive to compute and store, but whose matrix-vector products can be evaluated implicitly:
    x2 = E⊤∆y,  x3 = C⁻¹x2,  x4 = Ex3,  x5 = B∆y
    [B − EC⁻¹E⊤]∆y = x5 − x4
i.e., we can run PCG implicitly on the reduced camera matrix at the same cost as the normal equations!
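A sketch of this implicit product, with dense random blocks standing in for B, E, and C (real implementations exploit sparsity and the block-diagonal structure of C), driving a hand-rolled CG:

```python
import numpy as np

rng = np.random.default_rng(1)
nc, n_pts = 5, 8
A = rng.standard_normal((nc + n_pts, nc + n_pts))
H = A @ A.T + np.eye(nc + n_pts)       # SPD stand-in for H_mu
B, E, C = H[:nc, :nc], H[:nc, nc:], H[nc:, nc:]
v, w = rng.standard_normal(nc), rng.standard_normal(n_pts)
Cinv = np.linalg.inv(C)                # block diagonal and cheap in practice

def S_matvec(dy):
    """Apply S = B - E C^{-1} E^T without ever forming S."""
    x2 = E.T @ dy
    x3 = Cinv @ x2
    x4 = E @ x3
    x5 = B @ dy
    return x5 - x4

def cg(matvec, b, tol=1e-12, max_iters=100):
    """Plain conjugate gradients on an SPD operator given only as a matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        if np.sqrt(rs) < tol:
            break
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

dy = cg(S_matvec, v - E @ (Cinv @ w))  # solve the reduced camera system
dz = Cinv @ (w - E.T @ dy)             # back-substitute for the points
full = np.linalg.solve(H, np.concatenate([v, w]))
```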
Lemma (Saad 2003): CG on the reduced camera matrix with a preconditioner P is the same as CG on the normal equations with the SSOR preconditioner:

    M(P) = [ P   0 ] [ P⁻¹  0  ] [ P   E ]
           [ E⊤  C ] [ 0   C⁻¹ ] [ 0   C ]
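Applying M(P)⁻¹ needs only solves with P and C. A NumPy sketch that checks the factored application against assembling M(P) explicitly (toy dense blocks, hypothetical sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
nc, n_pts = 4, 6
G1 = rng.standard_normal((nc, nc))
G2 = rng.standard_normal((n_pts, n_pts))
P = G1 @ G1.T + np.eye(nc)             # SPD camera-block preconditioner
C = G2 @ G2.T + np.eye(n_pts)          # SPD point block
E = rng.standard_normal((nc, n_pts))

# M(P) assembled explicitly, only to check the factored application below
L = np.block([[P, np.zeros((nc, n_pts))], [E.T, C]])
D = np.block([[P, np.zeros((nc, n_pts))], [np.zeros((n_pts, nc)), C]])
U = np.block([[P, E], [np.zeros((n_pts, nc)), C]])
M = L @ np.linalg.inv(D) @ U

def apply_Minv(z):
    """Apply M(P)^{-1} z = U^{-1} D L^{-1} z using only solves with P and C."""
    u1 = np.linalg.solve(P, z[:nc])              # forward solve with L
    u2 = np.linalg.solve(C, z[nc:] - E.T @ u1)
    y1, y2 = P @ u1, C @ u2                      # multiply by D = diag(P, C)
    t2 = np.linalg.solve(C, y2)                  # backward solve with U
    t1 = np.linalg.solve(P, y1 - E @ t2)
    return np.concatenate([t1, t2])

z = rng.standard_normal(nc + n_pts)
out = apply_Minv(z)
ref = np.linalg.solve(M, z)
```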
Run time
Solution Accuracy
Current bundle adjustment algorithms do not scale beyond 1-2K images.

Our Algorithm
1. 14k images, 4.5M points in less than an hour.
2. Inexact step Levenberg Marquardt algorithm.
3. Predictable and minimal memory usage.
4. No need for high performance BLAS/LAPACK.
5. Easily parallelizable (shared and distributed memory).
6. Simple preconditioners give state of the art performance.
Dataset
1. Ladybug: Images captured from a vehicle driving down a street.
2. Skeletal: Sparse incremental reconstruction of Flickr images.
3. Final: Reconstructions from the final stage of the skeletal sets algorithm on Flickr images.

[Figure: Schur complement sparsity vs. number of images for the Ladybug, Skeletal, and Final datasets, ranging from small/sparse to large/dense.]
[Figure: Relative decrease in RMS error vs. time (seconds) for Venice Skeletal (1936 images), Venice Final (13696 images), Trafalgar Skeletal (257 images), and Ladybug (1197 images), comparing the explicit-direct, explicit-sparse, normal-jacobi, explicit-jacobi, implicit-jacobi, and implicit-ssor solvers.]
1. The first few steps don't need to be very accurate; the initial decrease can be quite fast.
2. The quality of the preconditioner decides performance in later iterations.
3. Simpler preconditioners give crude solutions fast and then stall.
4. Small problems: SBA/direct factorization.
5. Large problems: PCG with SSOR preconditioning.