DEPTH, YOU, AND THE WORLD
Jamie Shotton
Transcript of a slide deck from the Intelligent Sensing Summer School 2013 (cis.eecs.qmul.ac.uk/2013SummerSchool/JamieShotton)
Kinect Adventures
• Depth sensing camera
• Tracks 20 body joints in real time
• Recognises your face and voice
What the Kinect Sees
depth image (camera view)
What the Kinect Sees
depth image (camera view), top view, side view
Structured Light
Figure: x, y, z axes; baseline between the optic centre of the camera and the optic centre of the IR laser; imaging plane; objects at depth d1 and depth d2.
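The triangulation geometry above can be sketched numerically: a projected IR dot shifts along the baseline direction in the camera image by a disparity inversely proportional to depth. The calibration numbers below (focal length, baseline) are illustrative values, not the Kinect's actual calibration.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth (metres) from the pattern shift (pixels) seen
    between the IR laser's optic centre and the camera's optic centre."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearer object at depth d1 produces twice the disparity of one at d2 = 2 * d1:
d1 = depth_from_disparity(disparity_px=58.5, focal_length_px=585.0, baseline_m=0.075)   # 0.75 m
d2 = depth_from_disparity(disparity_px=29.25, focal_length_px=585.0, baseline_m=0.075)  # 1.5 m
```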
Depth Makes Vision That Little Bit Easier
RGB:
• Only works in good lighting
• Background clutter
• Scale unknown
• Color and texture variation
DEPTH:
• Works in low light
• Background removal easier
• Scale known
• Uniform texture
ROADMAP
Unifying principle:
Per-pixel regression drives per-image model fitting
THE VITRUVIAN MANIFOLD
[CVPR 2012]
SCENE COORDINATE REGRESSION
[CVPR 2013]
THE VITRUVIAN MANIFOLD
Jonathan Taylor Jamie Shotton Toby Sharp Andrew Fitzgibbon
CVPR 2012
Human Pose Estimation
Given some image input, recover the 3D human pose: joint positions and angles.
In this work:
• Single frame at a time (no tracking)
• Kinect depth image as input (background removed)
Why is Pose Estimation Hard?
A Few Approaches
• Regress directly to pose? e.g. [Gavrila ’00] [Agarwal & Triggs ’04]
• Detect parts? e.g. [Bourdev & Malik ‘09] [Plagemann et al. ‘10] [Kalogerakis et al. ‘10]
• Detect and assemble parts? e.g. [Felzenszwalb & Huttenlocher ’00] [Ramanan & Forsyth ’03] [Sigal et al. ’04]
• Per-Pixel Body Part Classification [Shotton et al. ‘11]
• Per-Pixel Joint Offset Regression [Girshick et al. ‘11]
Background: Learning Body Parts for Kinect
Body Part Classification [Shotton et al. CVPR 2011]
Pipeline: input depth image → body parts (BPC) → clustering → body joint hypotheses (front, side, and top views)
Synthetic Training Data
Train invariance to body shape and pose:
• Record mocap: 100,000s of poses [Vicon]
• Retarget to varied body shapes
• Render (depth, body parts) pairs
Depth Image Features
• Depth comparisons: very fast to compute

f(x; v) = d(x) − d(x + Δ),   where Δ = v / d(x)

• d(x) is the depth at image coordinate x; the feature response is the difference between the depth at x and the depth at an offset pixel
• The offset Δ scales inversely with depth, making the response largely depth invariant
• Background pixels: d = large constant
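A minimal sketch of the feature above; the array layout and boundary handling are assumptions, not from the talk:

```python
import numpy as np

LARGE_DEPTH = 1e6  # background pixels take a large constant depth


def depth_feature(depth, x, v):
    """f(x; v) = d(x) - d(x + v / d(x)): a depth comparison whose pixel
    offset shrinks with depth, making the response roughly depth invariant."""
    h, w = depth.shape
    col, row = x
    d_x = depth[row, col]
    # scale the offset v inversely with the depth at x
    oc = col + int(round(v[0] / d_x))
    orow = row + int(round(v[1] / d_x))
    if 0 <= oc < w and 0 <= orow < h:
        d_off = depth[orow, oc]
    else:
        d_off = LARGE_DEPTH  # offsets falling outside the image read as background
    return d_x - d_off
```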
Decision Tree Classification
For an image window centred at x, each internal node tests a depth-comparison feature, f(x; v1) > θ1, and branches yes/no; subsequent nodes test f(x; v2) > θ2, f(x; v3) > θ3, and so on, until the pixel reaches a leaf storing a distribution P(c) over body parts.
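Descending such a tree for one pixel can be sketched as follows; the node layout (dicts with `v`, `theta`, `yes`/`no` children, and a leaf key `P`) is illustrative, not the talk's data structure:

```python
def classify_pixel(tree, depth, x, feature_fn):
    """Walk a pixel down a decision tree of depth-comparison splits.
    Internal nodes hold (v, theta) and yes/no children; leaves hold a
    distribution P(c) over body parts under the key "P"."""
    node = tree
    while "theta" in node:  # internal split node
        branch = "yes" if feature_fn(depth, x, node["v"]) > node["theta"] else "no"
        node = node[branch]
    return node["P"]  # leaf: P(c)
```

A forest averages the leaf distributions of several such trees.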
Decision Forests Book
• Theory – Tutorial & Reference
• Practice – Invited Chapters
• Software and Exercises
• Tricks of the Trade
BPC: input depth → inferred body parts
no tracking or smoothing [Shotton et al. CVPR 2011]
BPC Clustering: input depth image → body parts → body joint hypotheses (front, side, and top views)
• Mean shift mode detection on density
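The mean shift step can be sketched as follows: a minimal single-mode version (Gaussian kernel, fixed bandwidth), not the full weighted clustering used for Kinect:

```python
import numpy as np


def mean_shift_mode(points, start, bandwidth, iters=50):
    """Find a density mode by repeatedly moving to the Gaussian-weighted
    mean of the points near the current estimate."""
    m = np.asarray(start, float)
    pts = np.asarray(points, float)
    for _ in range(iters):
        # kernel weight of every point relative to the current estimate
        w = np.exp(-np.sum((pts - m) ** 2, axis=1) / (2 * bandwidth ** 2))
        m_new = (w[:, None] * pts).sum(0) / w.sum()
        if np.linalg.norm(m_new - m) < 1e-6:
            break
        m = m_new
    return m
```

Run from several seeds, the distinct converged positions are the joint hypotheses.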
input depth → inferred body parts → inferred joint position hypotheses (front, side, and top views)
no tracking or smoothing [Shotton et al. CVPR 2011]
Body Part Recognition in Kinect
• Single frame at a time → robust
• Large training corpus → invariant
• Fast, parallel implementation
• No kinematic skeleton
• Limited handling of occlusion
A Few Approaches
Explain the data directly with a mesh model [Ballan et al. ‘08] [Baak et al. ‘11]
• GOOD: Full skeleton
• GOOD: Kinematic constraints enforced from the outset
• GOOD: Able to cope with occlusion and cropping
• BAD: Many local minima
• BAD: Highly sensitive to initial guess
• BAD: Potentially slow
From Body Parts to Dense Correspondences
Body parts → increasing number of parts → the “Vitruvian Manifold”
classification → regression
Texture is mapped across body shapes and poses
The “Vitruvian Manifold”: Embedding in 3D [L. da Vinci, 1487]
Each surface point is given coordinates (u, v, w), each ranging over [−1, 1]
Geodesic surface distances approximated by Euclidean distance
Overview
training images → regression forest → (applied to test images) inferred dense correspondences → energy function → optimization of model parameters θ → final optimized poses (front, right, and top views)
Human Skeleton Model
• Mesh is attached to a hierarchical skeleton (e.g. R_l_arm(θ) for the left arm)
• Each limb l has a transformation matrix T_l(θ) relating its local coordinate system to the world:

T_root(θ) = R_global(θ)
T_l(θ) = T_parent(l)(θ) R_l(θ)

• R_global(θ) encodes a global scaling, translation, and rotation
• R_l(θ) encodes a rotation and a fixed translation relative to its parent
• 13 parameterized joints, using quaternions to represent unconstrained rotations
• This gives θ a total of 1 + 3 + 4 + 4 × 13 = 60 degrees of freedom
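Composing the world transforms down the hierarchy can be sketched as follows (4×4 homogeneous matrices; the list layout, with parents preceding children, is an assumption for illustration):

```python
import numpy as np


def limb_world_transforms(parent, local_R):
    """Compose T_l = T_parent(l) @ R_l down the skeleton hierarchy.
    `parent[l]` is the parent limb index (-1 for the root, whose local
    transform is R_global); `local_R[l]` is the 4x4 local transform."""
    T = [None] * len(local_R)
    for l in range(len(local_R)):  # parents are assumed to precede children
        if parent[l] < 0:
            T[l] = local_R[l]
        else:
            T[l] = T[parent[l]] @ local_R[l]
    return T
```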
Linear Blend Skinning
Each vertex u:
• has position p in the base pose θ0
• is attached to K limbs {l_k} with weights {α_k}, k = 1, …, K
In a new pose θ, the skinned position of u in world coordinates is:

M(u; θ) = Σ_{k=1}^{K} α_k T_{l_k}(θ) T_{l_k}(θ0)^{−1} p

(T_{l_k}(θ0)^{−1} p maps p into limb l_k’s coordinate system; T_{l_k}(θ) maps it back into world coordinates.)
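The skinning equation above translates almost directly into code; a minimal sketch assuming 4×4 homogeneous transforms indexed by limb:

```python
import numpy as np


def skinned_position(p, limbs, weights, T, T0):
    """Linear blend skinning: M(u; theta) = sum_k a_k T_{l_k}(theta)
    T_{l_k}(theta0)^{-1} p, with p the vertex position in the base pose."""
    p = np.append(np.asarray(p, float), 1.0)  # homogeneous coordinates
    out = np.zeros(4)
    for l, a in zip(limbs, weights):
        # move p into limb l's coordinate system, then out to the new pose
        out += a * (T[l] @ np.linalg.inv(T0[l]) @ p)
    return out[:3]
```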
Test-Time Model Fitting
• Assume each observation x_i is generated by a point u_i on our model: x_i = M(u_i; θ)
• Observed points: x_1, …, x_n; corresponding model points: u_1, …, u_n
• What pose is the model in? Optimize the distance between each observed 3D point and its predicted 3D point:

min_θ min_{u_1 … u_n} Σ_i d(x_i, M(u_i; θ))

Optimization Strategies
• Alternating between pose θ and correspondences u_1, …, u_n: Iterative Closest Point (ICP)
• Traditionally, start from an initial θ (from tracking or manual initialization)
• Instead, we start from initial u_1, …, u_n, inferred discriminatively
• “One-shot” pose estimation: can we achieve a good result without iterating?
Note: simplified energy; more details to come
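The alternating ICP-style scheme can be sketched on a toy 1-D problem, where the "model" is a set of template points and the "pose" is a single translation t; the closed-form steps stand in for the real pose optimizer:

```python
import numpy as np


def icp_1d(template, observed, t=0.0, iters=10):
    """Alternate between correspondences and a 1-D translation t: the
    simplest instance of min_theta min_{u_i} sum_i d(x_i, M(u_i; theta))."""
    template = np.asarray(template, float)
    observed = np.asarray(observed, float)
    for _ in range(iters):
        # correspondence step: each observation picks its closest model point
        u = np.array([template[np.argmin(np.abs(template + t - x))]
                      for x in observed])
        # pose step: refit the translation given fixed correspondences
        t = float(np.mean(observed - u))
    return t
```

Starting from good correspondences (as the Vitruvian Manifold does) rather than a good initial pose sidesteps the sensitivity to the initial guess.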
One-Shot Pose Estimation: An Early Result
Can we achieve a good result without iterating?
(test depth image → ground truth correspondences (legacy coloring scheme) → convergence visualization)
Discriminative Model: Predicting Correspondences
training images → regression forest → (applied to input images) inferred dense correspondences
Learning the Correspondences
• How to learn the mapping from depth pixels to correspondences?
• Render a synthetic training set: mocap → rendered characters
• Train a regression forest
Learning a Regression Model at the Leaf Nodes
• Each pixel-correspondence pair descends to a leaf in the tree
• Mean shift mode detection summarizes the correspondences reaching each leaf
Inferring Correspondences
infer correspondences U → optimize parameters: min_θ E(θ, U)
Full Energy

E(θ, U) = λ_vis E_vis(θ, U) + λ_prior E_prior(θ) + λ_int E_int(θ)

• Term E_vis approximates hidden surface removal and uses a robust error function ρ(e)
• Gaussian prior term E_prior
• Self-intersection prior term E_int approximates the interior volume
The energy is robust to noisy correspondences:
• Correspondences far from their image points are “ignored”
• Correspondences facing away from the camera are “ignored”; this avoids the model getting stuck in front of the image pixels
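The talk does not specify ρ; a truncated quadratic is one common robust choice that realizes the behaviour above, growing quadratically near zero and flattening out so that far-away correspondences contribute a constant (and are effectively ignored):

```python
def rho(e, tau=0.1):
    """Truncated quadratic robust error: quadratic for |e| < tau, constant
    beyond. tau is an illustrative threshold, not a value from the talk."""
    return min(e * e, tau * tau)
```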
Hard Metric: “Perfect” Frame Accuracy
Results on 5000 synthetic images.
Plot: worst-case accuracy (% of frames with all joints within distance D of ground truth) vs. D, the max allowed distance to GT (m), for three curves: our algorithm; given GT U; optimal θ.
Example frames shown at D = 0.09m, 0.11m, 0.17m, 0.21m, 0.45m.
Comparison
Results on 5000 synthetic images.
Plot: worst-case accuracy (% of frames with all joints within distance D of ground truth) vs. D, the max allowed distance to GT (m).
Achievable algorithms: our algorithm (Vitruvian Manifold); [Shotton et al. ’11] (top hypothesis); [Girshick et al. ’11] (top hypothesis).
Require an oracle: [Shotton et al. ’11] (best of top 5); [Girshick et al. ’11] (best of top 5).
Sequence Result
Each frame fit independently: no temporal information used
Vitruvian Manifold: Summary
• Predict per-pixel image-to-model correspondences
– train invariance to body shape, size, and pose
• “One-shot” pose estimation
– fast, accurate
– auto-initializes using correspondences
SCENE COORDINATE REGRESSION FORESTS
FOR CAMERA RELOCALIZATION IN RGB-D IMAGES
JAMIE SHOTTON BEN GLOCKER CHRISTOPHER ZACH SHAHRAM IZADI ANTONIO CRIMINISI ANDREW FITZGIBBON
[CVPR 2013]
KinectFusion
Joint work with Shahram Izadi, Richard Newcombe, David Kim, Otmar Hilliges, David Molyneaux, Pushmeet Kohli, Steve Hodges, Andrew Davison, Andrew Fitzgibbon. SIGGRAPH, UIST and ISMAR 2011.
KinectFusion
Work by: Chen, Bautembach, Izadi. To appear at SIGGRAPH 2013.
RELOCALIZATION
• Revisit a known scene
• Observe a single frame of (RGB, depth)
• Infer the 6D camera pose H (camera-to-scene transformation)
TYPICAL APPROACHES TO CAMERA LOCALIZATION
Ranging from precise to approximate:
• Tracking: alignment relative to the previous frame, e.g. [Besl & McKay ‘92]
• Key point detection → local descriptors → matching → geometric verification, e.g. [Holzer et al. ‘12], [Winder & Brown ‘07], [Lepetit & Fua ‘06], [Irschara et al. ‘09]
• Whole key-frame matching, e.g. [Klein & Murray 2008] [Gee & Mayol-Cuevas 2012]
• Epitomic location recognition [Ni et al. 2009]
PROBLEMS IN REAL-WORLD CAMERA LOCALIZATION
• The real world is less exciting than vision researchers might like: sparse interest points can fail
• The real world is big
SCENE COORDINATE REGRESSION
Offline approach to relocalization:
• observe a scene
• train a regression forest
• revisit the scene
Aim for really precise localization:
• e.g. suitable for AR overlays
• from a single frame
• without an explicit 3D model
SCENE COORDINATE REGRESSION
Let each pixel predict a direct correspondence to a 3D point in scene coordinates.
Figure: input RGB, input depth, and desired correspondences at points A, B, C; scene coordinate XYZ visualized in RGB color space; 3D model from KinectFusion (only used for visualization).
SCENE COORDINATE REGRESSION (SCORE) FORESTS
Each pixel p, described by depth & RGB features (probe offsets δ1(p), δ2(p)), descends trees 1, …, T of the SCoRe forest to leaves l_1, …, l_T.
Leaf predictions: ℳ_l ⊂ ℝ³
Forest prediction: ℳ(p) = ⋃_t ℳ_{l_t(p)}
TRAINING A SCORE FOREST
Training data: RGB-D frames with known camera poses H.
Generate 3D pixel labels automatically: for each pixel’s camera-space point x, the label is m = H x.
Learning (standard):
• Greedily train each tree with a reduction-in-spatial-variance objective: regression, not classification
• Mean shift to summarize the distribution at leaf l into a small set ℳ_l ⊂ ℝ³
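Generating the labels m = H x can be sketched as a back-projection of every depth pixel followed by the camera-to-scene transform; the pinhole intrinsics (fx, fy, cx, cy) are assumed parameters, not values from the talk:

```python
import numpy as np


def scene_coordinate_labels(depth, H, fx, fy, cx, cy):
    """Per-pixel training labels m = H x: back-project each pixel of a depth
    frame to a camera-space point x, then map it into scene coordinates with
    the known camera-to-scene pose H (4x4 homogeneous matrix)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)  # homogeneous x
    return (pts @ H.T)[..., :3]                                  # m = H x
```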
SCORE FORESTS: PROPERTIES
• A single-step alternative to the traditional pipeline (interest point detection → description → matching)
• In theory, only three 3D-to-3D correspondences are needed to infer the 6D camera pose: the Kabsch algorithm (a.k.a. orthogonal Procrustes alignment)
• Thus, the forest need only be applied at three test image pixels
• any three pixels will do
• sparseness gives efficiency
• in practice, noise in the predictions means we use more than three pixels
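The Kabsch step mentioned above can be sketched as follows: the SVD of the cross-covariance of the centered point sets gives the rotation, with a determinant check to rule out reflections:

```python
import numpy as np


def kabsch(src, dst):
    """Rigid transform (R, t) aligning 3-D point set src onto dst: the
    Kabsch / orthogonal Procrustes solution via SVD."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    Hm = (src - cs).T @ (dst - cd)        # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(Hm)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With three noise-free, non-collinear correspondences this recovers the pose exactly, which is why three pixels suffice in theory.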
ROBUST CAMERA POSE OPTIMIZATION
Energy function: a sum over pixel index i of a robust error function applied, as a function of the camera pose, to the correspondences predicted by the forest at pixel i.
Optimization:
• Preemptive RANSAC [Nistér ICCV 2003]
• With pose refinement [Chum et al. DAGM 2003]: efficient updates to the means & covariances used by the Kabsch SVD
• Only a small subset of pixels used
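The preemptive scoring scheme can be sketched as follows: score all surviving pose hypotheses on successive chunks of pixels and halve the pool each round until one remains; `score_fn` (e.g. an inlier count against the forest's predictions) is an assumed callback, not the talk's exact energy:

```python
def preemptive_ransac(hypotheses, score_fn, pixels, chunk=10):
    """Preemptive RANSAC sketch [Nistér '03]: breadth-first scoring with
    hypothesis culling, so every surviving hypothesis sees the same pixels."""
    pool = [[0.0, h] for h in hypotheses]  # [accumulated score, hypothesis]
    start = 0
    while len(pool) > 1 and start < len(pixels):
        batch = pixels[start:start + chunk]
        for entry in pool:
            entry[0] += score_fn(entry[1], batch)
        pool.sort(key=lambda e: e[0], reverse=True)
        pool = pool[: max(1, len(pool) // 2)]  # keep the better-scoring half
        start += chunk
    return pool[0][1]
```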
INLYING FOREST PREDICTIONS
Test images → inferred camera pose; inliers shown for six hypotheses
PREEMPTIVE RANSAC OPTIMIZATION
THE 7SCENES DATASET
Example scenes: Heads, Pumpkin, RedKitchen, Stairs
Dataset to be released at CVPR
BASELINES FOR COMPARISON
Sparse Key-Points (RGB only): ORB matching [Rublee et al. ICCV 2011]
• FAST detector
• Rotation-aware BRIEF descriptor
• Hashing for matching
• Geometric verification: RANSAC & perspective three-point
• Final refinement given inliers
Tiny-Image Key-Frames (RGB & Depth) [Klein & Murray ECCV 2008] [Gee & Mayol-Cuevas BMVC 2012]
• Downsample to 40x30 pixels
• Blur
• Normalized Euclidean distance
• Brute-force search
• Interpolation of 100 closest poses
QUANTITATIVE COMPARISON
Metric: proportion of test frames with < 0.05m translational error and < 5° angular error
Results: comparison across choices of image features
QUALITATIVE COMPARISON
ground truth DA-RGB SCoRe forest sparse baseline closest training pose
QUALITATIVE COMPARISON
ground truth DA-RGB SCoRe forest sparse baseline closest training pose
TRACK VISUALIZATION VIDEOS
ground truth vs. DA-RGB SCoRe forest vs. RGB sparse baseline
single frame at a time: no tracking
AR VISUALIZATION
RGB input + AR overlay; depth input + AR overlay; rendering of model from inferred pose
single frame at a time: no tracking
SIMPLE ROBUST TRACKING
Add a single extra hypothesis to the optimization: the result from the previous frame
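The tracking extension above is tiny in code. A sketch, where `sample_pose` stands in for whatever draws fresh pose hypotheses for the RANSAC pool (names and the hypothesis count are illustrative):

```python
def hypotheses_with_tracking(sample_pose, prev_pose, n=1024):
    """Seed the pose optimization with n sampled hypotheses plus, when
    available, the previous frame's result as one extra hypothesis."""
    hyps = [sample_pose() for _ in range(n)]
    if prev_pose is not None:
        hyps.append(prev_pose)
    return hyps
```

If the previous pose is still good it survives the preemptive culling; if not, it is discarded like any other hypothesis, which is what makes the tracking robust.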
AR VISUALIZATION WITH TRACKING
RGB input + AR overlay; depth input + AR overlay; rendering of model from inferred pose
simple robust frame-to-frame tracking enabled
MODEL-BASED REFINEMENT
Requires a 3D model of the scene
Run ICP between the observed image and the model, starting from our inferred pose
[Bar chart: proportion of frames correct (0–100%) on the Chess, Fire, Heads, Office, Pumpkin, RedKitchen, and Stairs scenes, comparing baselines (Tiny-Image Depth, Tiny-Image RGB, Tiny-Image RGB-D, Sparse RGB) against Ours: Depth, Ours: DA-RGB, and Ours: DA-RGB + D]
[Besl & McKay PAMI 1992]
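The refinement step could be sketched as a minimal point-to-point ICP in the Besl & McKay style: transform the observed points by the current pose estimate, find nearest-neighbour correspondences in the model, and solve the incremental rigid alignment in closed form (Kabsch/SVD). This is an assumed illustration with brute-force correspondences, not the authors' implementation:

```python
import numpy as np

def icp_refine(src, dst, R, t, iters=10):
    """Minimal point-to-point ICP sketch. src: observed 3D points (N,3),
    dst: model points (M,3), (R, t): initial pose, e.g. from the SCoRe
    forest. Returns the refined (R, t)."""
    for _ in range(iters):
        moved = src @ R.T + t
        # Nearest-neighbour correspondences (brute force for clarity).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        corr = dst[d2.argmin(axis=1)]
        # Closed-form incremental rigid alignment (Kabsch via SVD).
        mu_s, mu_c = moved.mean(0), corr.mean(0)
        H = (moved - mu_s).T @ (corr - mu_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_c - R_step @ mu_s
        # Compose the increment with the running pose estimate.
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

Because the SCoRe forest already provides a pose close to the true one, ICP here only needs a good local basin of convergence, which is exactly what a per-frame relocalizer supplies.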
AR VISUALIZATION WITH TRACKING AND REFINEMENT
RGB input
+ AR overlay
depth input
+ AR overlay
rendering of model
from inferred pose
simple robust frame-to-frame tracking and ICP-based model refinement enabled
Fire Scene
SCoRe Forest
(single frame at a time)
SCoRe Forest
+
simple robust
frame-to-frame tracking
SCoRe Forest
+
simple robust
frame-to-frame tracking
+
ICP refinement to 3D model
RGB input
+ AR overlay
depth input
+ AR overlay
rendering of model
from inferred pose
Pumpkin Scene
RGB input
+ AR overlay
depth input
+ AR overlay
rendering of model
from inferred pose
SCoRe Forest
(single frame at a time)
SCoRe Forest
+
simple robust
frame-to-frame tracking
SCoRe Forest
+
simple robust
frame-to-frame tracking
+
ICP refinement to 3D model
SCENE RECOGNITION
Confusion matrix (rows: true scene, columns: predicted scene):

| True \ Predicted | Chess | Fire | Heads | Office | Pumpkin | RedKitchen | Stairs |
|---|---|---|---|---|---|---|---|
| Chess | 100.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| Fire | 2.0% | 98.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| Heads | 0.0% | 0.0% | 100.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| Office | 0.0% | 0.5% | 4.0% | 95.5% | 0.0% | 0.0% | 0.0% |
| Pumpkin | 0.0% | 0.0% | 0.0% | 0.0% | 100.0% | 0.0% | 0.0% |
| RedKitchen | 2.8% | 1.2% | 3.6% | 0.0% | 0.0% | 92.4% | 0.0% |
| Stairs | 0.0% | 0.0% | 10.0% | 0.0% | 0.0% | 0.0% | 90.0% |
Train one SCoRe Forest per scene
Test frame against all scenes
Scene with lowest energy wins
Single frame only
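The recognition rule above can be sketched in a few lines. The names are hypothetical; `scene_energies` stands in for a mapping from each scene to the optimized camera-pose energy of its SCoRe forest on the test frame:

```python
def recognise_scene(frame, scene_energies):
    # scene_energies: dict mapping scene name -> energy function
    # (lower = better fit). The scene whose forest explains the
    # frame with the lowest energy wins.
    return min(scene_energies, key=lambda s: scene_energies[s](frame))
```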
SCENE COORDINATE REGRESSION - SUMMARY
Scene coordinate regression forests
allow accurate relocalization without an explicit 3D model
provide a single-step alternative to the detection/description/matching pipeline
can be applied at any valid pixel, not just at interest points
Tracking-by-detection is approaching temporal tracking accuracy
Wrap Up
• New depth sensors
• Machine learning + big (synthetic) data
• Per-pixel regression and per-image model fitting
Coming November…
Thank you!
With thanks to: Andrew Fitzgibbon, Jon Taylor, Ross Girshick, Mat Cook, Andrew Blake, Toby Sharp, Pushmeet Kohli, Ollie Williams, Sebastian Nowozin, Antonio Criminisi, Mihai Budiu, Duncan Robertson, John Winn, Shahram Izadi
The whole Kinect team, especially: Alex Kipman, Mark Finocchio, Ryan Geiss, Richard Moore, Robert Craig, Momin Al-Ghosien, Matt Bronder, Craig Peeper