TRANSCRIPT
Efficient computation of Robust Low-Rank Matrix Approximations in the Presence of Missing Data
using the L1 Norm
Anders Eriksson and Anton van den Hengel
CVPR 2010
• Usual low-rank approximation under the L2 norm – SVD.
• Low-rank approximation under the L2 norm with missing data – Wiberg algorithm.
• "Robust" low-rank approximation in the presence of:
  – missing data
  – outliers
  – using the L1 norm
  – a generalization of the Wiberg algorithm.
Problem

Given Y ∈ R^(m×n), find a rank-r approximation Ŷ = U V, with U ∈ R^(m×r) and V ∈ R^(r×n), that minimizes the residual on the known entries:

  min over U, V of || W ⊙ (Y − U V) ||

W ∈ R^(m×n) is the indicator matrix, w_ij = 1 if y_ij is known, else 0.
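As a minimal sketch of this objective (the variable names and sizes are illustrative, not from the paper), the masked residual can be evaluated as:

```python
import numpy as np

# Illustrative sizes; r < min(m, n) for a low-rank approximation.
m, n, r = 6, 5, 2
rng = np.random.default_rng(0)
U = rng.standard_normal((m, r))               # U in R^(m x r)
V = rng.standard_normal((r, n))               # V in R^(r x n)
Y = U @ V                                     # measurement matrix, here exactly rank r
W = (rng.random((m, n)) > 0.2).astype(float)  # w_ij = 1 if y_ij is known, else 0

def masked_error(Y, U, V, W, ord=2):
    """Norm of W ⊙ (Y - UV): only the known entries contribute."""
    R = W * (Y - U @ V)
    if ord == 2:
        return np.sqrt((R ** 2).sum())  # L2 (Frobenius) norm
    return np.abs(R).sum()              # L1 norm

print(masked_error(Y, U, V, W))  # 0.0 here, since Y is exactly U V
```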
Wiberg Algorithm

  φ(u, v) = (1/2) || W ⊙ (Y − U V) ||₂²

with the factors vectorized as

  u = [u₁ᵀ, u₂ᵀ, …, u_mᵀ]ᵀ ∈ R^(mr),  u_i ∈ R^r
  v = [v₁ᵀ, v₂ᵀ, …, v_nᵀ]ᵀ ∈ R^(nr),  v_i ∈ R^r

so that φ is linear in either factor when the other is held fixed:

  φ(u, v) = (1/2) || F(W, u) v − W y ||₂² = (1/2) || G(W, v) u − W y ||₂²

The W matrix indicates the presence/absence of elements.
From: "On the Wiberg algorithm for matrix factorization in the presence of missing components", Okatani et al., IJCV 2006.
Alternating Least Squares
• To find the minimum of φ, set its derivatives with respect to u and v to zero.
• Consider the two resulting equations independently.
• Starting from initial estimates u0 and v0, alternately update u from v and v from u.
• Converges very slowly, especially with missing components and strong noise.
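The alternating scheme above can be sketched as follows; this is a generic implementation of the idea under the slide's setup, not the authors' code, and the names are illustrative:

```python
import numpy as np

def als(Y, W, r, iters=50, seed=0):
    """Alternating least squares for W ⊙ (Y - UV): each step solves a
    least-squares problem for one factor with the other held fixed."""
    m, n = Y.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(iters):
        # Update each column v_j using only the rows where y_ij is observed.
        for j in range(n):
            idx = W[:, j] > 0
            V[:, j], *_ = np.linalg.lstsq(U[idx], Y[idx, j], rcond=None)
        # Update each row u_i using only the columns where y_ij is observed.
        for i in range(m):
            idx = W[i, :] > 0
            U[i], *_ = np.linalg.lstsq(V[:, idx].T, Y[i, idx], rcond=None)
    return U, V
```

With many missing entries or strong noise this loop can take a very large number of iterations to make progress, which is the slow convergence the slide refers to.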
Back to Wiberg
• In non-linear least squares problems with multiple parameters, fixing part of the parameters often makes minimization over the remaining parameters simple, e.g., a linear problem, so a closed-form solution can be found.
• Wiberg applied this idea to the problem of factorizing a matrix with missing components.
Back to Wiberg
• For a fixed u, the L2-norm problem becomes a linear least-squares minimization in v.
  – Compute the optimal v*(u).
• Apply the Gauss-Newton method to the resulting non-linear least-squares problem in u to find the optimal u*.
• The derivative is easy to compute because of the L2 norm:
  min over u of φ(u) = (1/2) || g(u) ||₂²,  g(u) = g(u, v*(u))

with the total derivative

  dg/du = ∂g/∂u + (∂g/∂v)(dv*/du)

entering the Gauss-Newton normal equations g′(u)ᵀ g′(u) Δu = −g′(u)ᵀ g(u).
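A rough sketch of the L2-Wiberg idea: v*(u) is solved in closed form, and Gauss-Newton is applied to the remaining problem in u. The finite-difference Jacobian below stands in for the analytic derivative used in the algorithm; all names and sizes are illustrative:

```python
import numpy as np

def v_star(U, Y, W):
    """Optimal V for fixed U under the masked L2 norm (column by column)."""
    r, n = U.shape[1], Y.shape[1]
    V = np.zeros((r, n))
    for j in range(n):
        idx = W[:, j] > 0
        V[:, j], *_ = np.linalg.lstsq(U[idx], Y[idx, j], rcond=None)
    return V

def residual(u_flat, Y, W, m, r):
    """g(u) = vectorized W ⊙ (Y - U v*(U))."""
    U = u_flat.reshape(m, r)
    return (W * (Y - U @ v_star(U, Y, W))).ravel()

def wiberg_l2(Y, W, r, iters=20, eps=1e-6, seed=0):
    m = Y.shape[0]
    u = np.random.default_rng(seed).standard_normal(m * r)
    for _ in range(iters):
        g = residual(u, Y, W, m, r)
        # Finite-difference Jacobian dg/du (illustration only; the
        # algorithm uses the analytic total derivative).
        J = np.column_stack([
            (residual(u + eps * e, Y, W, m, r) - g) / eps
            for e in np.eye(u.size)])
        # Gauss-Newton step; pinv absorbs the gauge freedom U -> UA.
        u = u - np.linalg.pinv(J.T @ J) @ J.T @ g
    return u.reshape(m, r)
```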
Linear Programming and Definitions
L1-Wiberg Algorithm
Minimization problem in terms of L1 norm
Minimization problem in terms of v and u independently
Substituting v* into u
Comparing to L2-Wiberg
• V*(U) is not easily differentiable.
• The objective φ(u, v*(u)) is not a least-squares minimization problem, so Gauss-Newton cannot be applied directly.
• Idea: let V*(U) denote the optimal basic solution of the linear program. V*(U) is differentiable assuming the problem is feasible, by the fundamental theorem on the differentiability of linear programs.

Jacobian for the Gauss-Newton step: the derivative of the solution to a linear programming problem.
Add an additional term to the function and minimize the value of that term?

  min over v of || A v − b ||₁  (posed as a linear program)

The optimal basic solution has the closed form v*_B = B⁻¹ b, where B is the basis matrix at the optimum, so the Jacobian ∂v*(u)/∂u is obtained by differentiating B⁻¹ b with respect to u, and φ(u, v*(u)) is then linearized (≈) about the current u for the Gauss-Newton step.
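The L1 subproblem min_v ||Av − b||₁ can be posed as a linear program by introducing elementwise slacks t ≥ |Av − b|. A sketch using SciPy's `linprog` (a generic reformulation for illustration, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linprog

def l1_solve(A, b):
    """Minimize ||A v - b||_1 via the LP:
         min 1^T t  s.t.  A v - t <= b,  -A v - t <= -b,  t >= 0."""
    m, n = A.shape
    # Decision variables: [v (n entries, free), t (m entries, >= 0)].
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]
```

The solver returns an optimal basic solution; it is this basic solution whose derivative with respect to u the L1-Wiberg algorithm needs.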
Results
• Tested on synthetic data.
  – Randomly created measurement matrices Y drawn from a uniform distribution on [−1, 1].
  – 20% of the entries missing; 10% corrupted with noise in [−5, 5].
• Tested on real data.
  – Dinosaur sequence from Oxford VGG.
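The synthetic setup described above might be generated as follows (the matrix sizes, and the choice to draw the low-rank factors from [−1, 1], are guesses beyond what the slide states):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 30, 4                            # illustrative sizes
# Low-rank measurement matrix built from uniform [-1, 1] factors (an assumption).
Y = rng.uniform(-1, 1, (m, r)) @ rng.uniform(-1, 1, (r, n))
W = (rng.random((m, n)) > 0.2).astype(float)   # 20% of the entries missing
noise_mask = rng.random((m, n)) < 0.1          # 10% of the entries corrupted
Y = Y + noise_mask * rng.uniform(-5, 5, (m, n))  # outlier noise drawn from [-5, 5]
```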
Structure from motion
• Projections of 319 points tracked over 36 views, with noise added to 10% of the points.
• Full 3D reconstruction ≈ low-rank matrix approximation.
• The figure above shows the residual for the visible points: under the L2 norm the reconstruction error is distributed evenly among all elements of the residual, while under the L1 norm the error is concentrated on a few elements.
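A one-dimensional analogue of this last point: fitting a constant to data containing one outlier. The L2 fit (the mean) spreads the residual over every element, while the L1 fit (the median) concentrates it on the outlier:

```python
import numpy as np

y = np.array([1.0, 1.0, 1.0, 1.0, 10.0])  # four inliers, one outlier
l2_fit = y.mean()       # minimizes the sum of squared residuals
l1_fit = np.median(y)   # minimizes the sum of absolute residuals
print(np.abs(y - l2_fit))  # every residual is nonzero
print(np.abs(y - l1_fit))  # the error sits entirely on the outlier
```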