Lecture 4: Reconstruction from Two Views
Joaquim Salvi, Universitat de Girona. Visual Perception.


Page 2

Contents

4. Reconstruction from two views

4.1 Shape from X

4.2 Triangulation principle

4.3 Epipolar geometry – Modelling

4.4 Epipolar geometry – Calibration

4.5 Constraints in stereo vision

4.6 Experimental comparison of methods

4.7 Sample: Mobile robot performing 3D mapping


Page 4

4.1 Shape from X

Techniques based on:

– Modifying the intrinsic camera parameters, e.g. Depth from Focus/Defocus and Depth from Zooming
– Projecting an additional source of light onto the scene, e.g. Shape from Structured Light and Shape from Photometric Stereo
– Considering additional surface information, e.g. Shape from Shading, Shape from Texture and Shape from Geometric Constraints
– Multiple views, e.g. Shape from Stereo and Shape from Motion

[Figure: Shape from Focus/Defocus]

Pages 5-7 repeat the same taxonomy, illustrated with figures for Shape from Structured Light, Shape from Shading and Shape from Stereo.


Page 10

4.2 Triangulation principle

[Figure: world coordinate system {W}, camera coordinate systems {C} and {C'}, image planes {I} and {I'}, and retinal planes {R} and {R'}]

Page 11

4.2 Triangulation principle

[Figure: a world point P_w projects to P_u in {I} and P'_u in {I'}; u and v are the optical rays through the two projections]

P_w = P_u + m·u
P_w = P'_u + m'·v

Steps:
1. Set P_u + m·u = P'_u + m'·v
2. Expand into x, y, z components
3. Solve for m and m'
4. Compute P_w

Page 12

4.2 Triangulation principle

[Figure: due to noise the optical rays u and v do not intersect exactly; the 3D object point P_w is estimated by the reconstructed point P̂_w closest to both rays]

P_w = P_u + m·u
P_w = P'_u + m'·v

Page 13

4.2 Triangulation principle

[Figure: the two optical rays p and q, the closest points P_a and P_b, and the reconstructed point between them]

http://astronomy.swin.edu.au/~pbourke/geometry/lineline3d/

P_a = P_1 + mu_a (P_2 - P_1)
P_b = P_3 + mu_b (P_4 - P_3)

Two different ways:

1. Minimize the distance between the points:
   min ||P_b - P_a||² = min ||P_1 + mu_a (P_2 - P_1) - P_3 - mu_b (P_4 - P_3)||²,
   finding mu_a and mu_b once expanded into x, y and z.

2. Use the dot products between vectors:
   (P_a - P_b)^T (P_2 - P_1) = 0
   (P_a - P_b)^T (P_4 - P_3) = 0
   because the shortest segment between the two lines is perpendicular to both of them;
   finding mu_a and mu_b once expanded into P_a, P_b and x, y and z.
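The two perpendicularity conditions reduce to a 2x2 linear system in mu_a and mu_b. A minimal NumPy sketch (function and variable names are my own):

```python
import numpy as np

def closest_points(p1, p2, p3, p4):
    """Closest points between lines Pa = p1 + mua*(p2-p1), Pb = p3 + mub*(p4-p3).

    Imposes (Pa-Pb).(p2-p1) = 0 and (Pa-Pb).(p4-p3) = 0 and solves the
    resulting 2x2 system for mua and mub.
    """
    u, v, w = p2 - p1, p4 - p3, p1 - p3
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w, v @ w
    den = a * c - b * b              # zero when the lines are parallel
    mua = (b * e - c * d) / den
    mub = (a * e - b * d) / den
    pa, pb = p1 + mua * u, p3 + mub * v
    return pa, pb, 0.5 * (pa + pb)   # midpoint of the shortest segment
```

The midpoint of the shortest segment is a common choice for the reconstructed point P̂_w when the two rays do not intersect.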

Page 14

4.2 Triangulation principle

In practice we can use Least-Squares. With $A$ and $B$ the $3\times4$ projection matrices of the two cameras,

$$s_1 \begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \qquad s_2 \begin{pmatrix} u_2 \\ v_2 \\ 1 \end{pmatrix} = \begin{pmatrix} B_{11} & B_{12} & B_{13} & B_{14} \\ B_{21} & B_{22} & B_{23} & B_{24} \\ B_{31} & B_{32} & B_{33} & B_{34} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

Eliminating the scale factors $s_1$ and $s_2$ gives two linear equations in $(X, Y, Z)$ per view, stacked as $Q\,X = C$:

$$\begin{pmatrix} A_{11}-u_1 A_{31} & A_{12}-u_1 A_{32} & A_{13}-u_1 A_{33} \\ A_{21}-v_1 A_{31} & A_{22}-v_1 A_{32} & A_{23}-v_1 A_{33} \\ B_{11}-u_2 B_{31} & B_{12}-u_2 B_{32} & B_{13}-u_2 B_{33} \\ B_{21}-v_2 B_{31} & B_{22}-v_2 B_{32} & B_{23}-v_2 B_{33} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} u_1 A_{34}-A_{14} \\ v_1 A_{34}-A_{24} \\ u_2 B_{34}-B_{14} \\ v_2 B_{34}-B_{24} \end{pmatrix}$$

solved by least squares: $X = (Q^T Q)^{-1} Q^T C$.

Add additional rows if we have additional views of the same point.
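The stacked system above maps directly onto a least-squares solve. A sketch with NumPy (names are mine), where P1 and P2 play the roles of the 3x4 matrices A and B:

```python
import numpy as np

def triangulate_ls(P1, P2, uv1, uv2):
    """Least-squares triangulation from two 3x4 projection matrices.

    Eliminating the scale factor s from s*[u, v, 1]^T = P*[X, Y, Z, 1]^T
    yields two linear equations in (X, Y, Z) per view; additional views
    would simply append more rows to Q and c.
    """
    rows, rhs = [], []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(P[0, :3] - u * P[2, :3]); rhs.append(u * P[2, 3] - P[0, 3])
        rows.append(P[1, :3] - v * P[2, :3]); rhs.append(v * P[2, 3] - P[1, 3])
    Q, c = np.array(rows), np.array(rhs)
    X, *_ = np.linalg.lstsq(Q, c, rcond=None)   # X = (Q^T Q)^-1 Q^T c
    return X
```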


Page 17

4.3 Epipolar Geometry – Modelling

[Figure: epipolar geometry: optical centres O_C and O_C', image planes I and I', 3D point M, projections m and m', epipoles e and e', and the transformation {C'}K_C between the two camera frames]

• Focal points, epipoles and epipolar lines
• e is defined by the projection of O_C' in {I}; e' is defined by the projection of O_C in {I'}
• m defines an epipolar line l'_m in {I'}; m' defines an epipolar line l_m' in {I}
• All epipolar lines intersect at the epipole

Page 18

4.3 Epipolar Geometry – Modelling

[Figure: epipolar geometry of Camera 1 and Camera 2 on a real image pair: epipoles, epipolar lines, correspondence points, and zooms of Area 1 and Area 2]

Page 19

4.3 Epipolar Geometry – Modelling

[Figure: epipoles of a real image pair]

Page 20

4.3 Epipolar Geometry – Modelling

[Figure: epipolar geometry with the plane Π through O_C, O_C' and M]

• Epipolar geometry concerns the problem of computing the plane Π.
• The normal of a plane is given by the cross product of two vectors contained in it.
• M is unknown; m and m' are known.
• {W} can be located at {C} or at {C'}, and the geometry can be computed at {C} or at {C'} → 4 solutions.


Page 22

4.3 Epipolar Geometry – Modelling

[Figure: epipolar geometry; ${}^{C'}K_C$ transforms {C} into {C'}]

• Assume {W} at {C}. A point P in {C} is expressed in {C'} as

$$P' = {}^{C'}K_C\,P \qquad {}^{C'}K_C = \begin{pmatrix} R & t \\ 0^T & 1 \end{pmatrix} \qquad P' = R\,P + t$$

• The epipole e' is the projection of $O_C$ in {I'}; taking $P = (0, 0, 0, 1)^T$,

$$e' \simeq \begin{pmatrix} R & t \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = t$$

Page 23

4.3 Epipolar Geometry – Modelling

• Assume {W} at {C}

[Figure: epipolar geometry with {W} located at {C}]

Since epipolar lines are contained in the plane Π, we can define each line by the cross product of two vectors lying in the plane, obtaining the vector orthogonal to the line:

$$l'_m = e' \times P' = t \times (R\,m + t) = t \times R\,m = [t]_\times R\, m$$
$$l_{m'} = R^T [t]_\times^T\, m'$$

where $[t]_\times$ is the skew-symmetric matrix of t.

Page 24

4.3 Epipolar Geometry – Modelling

[Figure: epipolar geometry with both epipolar lines l'_m and l_m']

A point lies on its own epipolar line, so their inner product is zero (as homogeneous vectors they are orthogonal, i.e. their cosine is 0):

$$m'^T\, l'_m = m'^T [t]_\times R\, m = 0 \qquad m^T\, l_{m'} = m^T R^T [t]_\times^T\, m' = 0$$

with

$$l'_m = [t]_\times R\, m \qquad l_{m'} = R^T [t]_\times^T\, m'$$

The Fundamental matrix is defined by this inner product of a point with its epipolar line.
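These relations can be checked numerically on a synthetic rig (all numeric values below are illustrative assumptions): the projection m' must lie on the epipolar line l'_m = [t]_x R m.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ m == np.cross(t, m)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# Synthetic motion between {C} and {C'}: rotation about y, then translation
ang = 0.1
R = np.array([[np.cos(ang), 0, np.sin(ang)],
              [0, 1, 0],
              [-np.sin(ang), 0, np.cos(ang)]])
t = np.array([1.0, 0.2, 0.0])

P = np.array([0.3, -0.2, 4.0])   # a 3D point expressed in {C}
m = P / P[2]                     # metric projection in {I}
Pp = R @ P + t                   # the same point expressed in {C'}
mp = Pp / Pp[2]                  # metric projection in {I'}

l_m = skew(t) @ R @ m            # epipolar line of m in {I'}
print(mp @ l_m)                  # ~0: m' lies on its epipolar line
```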

Page 25

4.3 Epipolar Geometry – Modelling

[Figure: epipolar geometry]

Now we consider the intrinsics: points in pixels instead of metric coordinates,

$$\tilde m = A\,m \;\Rightarrow\; m = A^{-1}\tilde m \qquad \tilde m' = A'\,m' \;\Rightarrow\; m' = A'^{-1}\tilde m'$$

with the intrinsic matrix

$$A = \begin{pmatrix} \alpha_u & 0 & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{pmatrix}$$

Substituting into the epipolar constraints:

$$0 = m'^T [t]_\times R\, m = (A'^{-1}\tilde m')^T [t]_\times R\, A^{-1}\tilde m = \tilde m'^T A'^{-T} [t]_\times R\, A^{-1}\tilde m$$
$$0 = m^T R^T [t]_\times^T m' = \tilde m^T A^{-T} R^T [t]_\times^T A'^{-1}\tilde m'$$

so that

$$F = A'^{-T} [t]_\times R\, A^{-1} \qquad F' = A^{-T} R^T [t]_\times^T A'^{-1}$$
$$\tilde m'^T F\,\tilde m = 0 \qquad \tilde m^T F'\,\tilde m' = 0$$

Page 26

4.3 Epipolar Geometry – Modelling

F and F' are related by a transpose:

$$F' = F^T \qquad F = F'^T$$

Demonstration: from

$$F = A'^{-T} [t]_\times R\, A^{-1} \qquad F' = A^{-T} R^T [t]_\times^T A'^{-1}$$

we obtain

$$F'^T = \left(A^{-T} R^T [t]_\times^T A'^{-1}\right)^T = A'^{-T} [t]_\times R\, A^{-1} = F$$

The same derivation can be made assuming the origin at {C'}, obtaining two more fundamental matrices that are also equivalent to F and F'.

Page 27

4.3 Epipolar Geometry – Modelling

The Essential matrix is the calibrated case of the Fundamental matrix.

• The intrinsic parameters are known: A and A' are known.

The problem is reduced to estimating E or E':

$$F = A'^{-T} [t]_\times R\, A^{-1} \qquad F' = A^{-T} R^T [t]_\times^T A'^{-1}$$
$$E = [t]_\times R \qquad E' = R^T [t]_\times^T$$

Monocular stereo (a single moving camera) is a simplified case of F where A = A', reducing the complexity of computing F:

$$F = A^{-T} [t]_\times R\, A^{-1} \qquad F' = A^{-T} R^T [t]_\times^T A^{-1}$$
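A quick numeric sanity check of E = [t]_x R and F = A^-T E A^-1 on a synthetic monocular rig (A = A'; the intrinsic and motion values are illustrative assumptions):

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

A = np.array([[800, 0, 320],     # assumed intrinsics, A = A' (monocular case)
              [0, 800, 240],
              [0, 0, 1.0]])
R = np.eye(3)
t = np.array([0.5, 0.0, 0.1])

E = skew(t) @ R                                 # Essential matrix (calibrated)
F = np.linalg.inv(A).T @ E @ np.linalg.inv(A)   # F = A^-T [t]_x R A^-1

# Pixel projections of a 3D point in both views satisfy m'^T F m = 0
P = np.array([0.2, -0.1, 3.0])
m_pix = A @ (P / P[2])
Pp = R @ P + t
mp_pix = A @ (Pp / Pp[2])
print(mp_pix @ F @ m_pix)        # ~0
```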


Page 30

4.4 Epipolar Geometry – Calibration

The Eight Point Method

The epipolar geometry is defined by

$$\tilde m'^T F\, \tilde m = 0 \qquad \begin{pmatrix} x'_i & y'_i & 1 \end{pmatrix} F \begin{pmatrix} x_i \\ y_i \\ 1 \end{pmatrix} = 0$$

Operating, we obtain $U_n f = 0$ with

$$f = (F_{11}, F_{12}, F_{13}, F_{21}, F_{22}, F_{23}, F_{31}, F_{32}, F_{33})^T$$
$$u_i = (x_i x'_i,\ y_i x'_i,\ x'_i,\ x_i y'_i,\ y_i y'_i,\ y'_i,\ x_i,\ y_i,\ 1)$$
$$U_n = (u_1, u_2, \dots, u_n)^T$$

Page 31

4.4 Epipolar Geometry – Calibration

The Eight Point Method with Least Squares

The first solution of $U_n f = 0$ is the trivial one, $f = 0$: NOT WANTED.

F is defined up to a scale factor, so we can fix one of its components to 1. Let's fix $F_{33} = 1$. Each correspondence then gives $u_i\, f = -1$ with the reduced vectors

$$f = (F_{11}, F_{12}, F_{13}, F_{21}, F_{22}, F_{23}, F_{31}, F_{32})^T$$
$$u_i = (x_i x'_i,\ y_i x'_i,\ x'_i,\ x_i y'_i,\ y_i y'_i,\ y'_i,\ x_i,\ y_i)$$
$$U_n = (u_1, u_2, \dots, u_n)^T \qquad U_n f = -1_n$$

Solving by Least-Squares:

$$U_n^T U_n f = -U_n^T 1_n \qquad f = -(U_n^T U_n)^{-1} U_n^T 1_n$$
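A sketch of this estimator in NumPy (naming is mine; n >= 8 noise-free matches assumed):

```python
import numpy as np

def eight_point_ls(x, xp):
    """Eight-point estimation of F with F33 fixed to 1, via least squares.

    x, xp: (n, 2) arrays of matched points (x in image 1, x' in image 2),
    n >= 8.  Each match contributes one row u_i with u_i . f = -1, where
    u_i = (x x', y x', x', x y', y y', y', x, y).
    """
    U = np.column_stack([
        x[:, 0] * xp[:, 0], x[:, 1] * xp[:, 0], xp[:, 0],
        x[:, 0] * xp[:, 1], x[:, 1] * xp[:, 1], xp[:, 1],
        x[:, 0], x[:, 1],
    ])
    f, *_ = np.linalg.lstsq(U, -np.ones(len(x)), rcond=None)
    return np.append(f, 1.0).reshape(3, 3)
```

Note that fixing F33 = 1 fails when the true F33 is close to zero, one motivation for the eigen-analysis variant on the next page.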

Page 32

4.4 Epipolar Geometry – Calibration

The Eight Point Method with Eigen Analysis

$$F = A'^{-T} [t]_\times R\, A^{-1} \qquad [t]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix}$$

F has to be rank 2 because $[t]_\times$ is rank 2. Again, the trivial solution $f = 0$ of $U_n f = 0$ is NOT WANTED.

Any homogeneous system $U_n f = 0$, with $f = (F_{11}, \dots, F_{33})^T$, can be solved by SVD so that f lies in the nullspace of $U_n = U D V^T$:

[U, D, V] = svd(Un)

Hence f corresponds to a multiple of the column of V that belongs to the smallest (ideally zero) singular value of D. Note that f is only known up to a scale factor.
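A sketch of the SVD variant, including the rank-2 enforcement mentioned above (naming is mine; for real data, normalizing the pixel coordinates first is advisable, in line with the conclusions later in this lecture):

```python
import numpy as np

def eight_point_svd(x, xp):
    """Eight-point estimation of F via SVD, enforcing the rank-2 constraint.

    f is the right-singular vector of U_n associated with the smallest
    singular value (the nullspace direction); it is defined up to scale.
    """
    U = np.column_stack([
        x[:, 0] * xp[:, 0], x[:, 1] * xp[:, 0], xp[:, 0],
        x[:, 0] * xp[:, 1], x[:, 1] * xp[:, 1], xp[:, 1],
        x[:, 0], x[:, 1], np.ones(len(x)),
    ])
    _, _, Vt = np.linalg.svd(U)
    F = Vt[-1].reshape(3, 3)              # nullspace direction of U_n
    u, d, vt = np.linalg.svd(F)           # enforce rank 2:
    d[2] = 0.0                            # zero the smallest singular value
    return u @ np.diag(d) @ vt
```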


Page 35

4.5 Constraints in stereo vision

Extrinsics: camera pose. Intrinsics: optics and internal geometry.

$$s\,\tilde m = {}^{I}A_C\;{}^{C}K_W\;M_W \qquad s'\,\tilde m' = {}^{I'}A_{C'}\;{}^{C'}K_W\;M_W$$

3D reconstruction requires the intrinsics ${}^{I}A_C$, ${}^{I'}A_{C'}$ and the extrinsics ${}^{C}K_W$, ${}^{C'}K_W$.

[Figure: stereo setup with optical centres O_C and O_C', image planes I and I', and a 3D point M]

Constraints:
• The Correspondence Problem → F/E matrix
• Stereo Configurations:
  – Calibrated Stereo: Intrinsics and Extrinsics known → Triangulation!
  – Uncalibrated Stereo: Intrinsics and Extrinsics unknown → F matrix
  – Calibrated Monocular: Intrinsics known, Extrinsics unknown → E matrix
  – Uncalibrated Monocular: Intrinsics and Extrinsics unknown → F matrix


Page 38

4.6 Experimental comparison – methods

Method | Category | Optimisation | Rank-2
Seven point (7p) | Linear | - | yes
Eight point (8p) | Linear | LS or Eig. | no
Rank-2 constraint | Linear | LS | yes
Iterative Newton-Raphson | Iterative | LS | no
Linear iterative | Iterative | LS | no
Non-linear minimization in parameter space | Iterative | Eig. | yes
Gradient technique | Iterative | LS or Eig. | no
FNS | Iterative | AML | no
CFNS | Iterative | AML | no
M-Estimator | Robust | LS or Eig. | no / yes
LMedS | Robust | 7p / LS or Eig. | no
RANSAC | Robust | 7p / Eig. | no
MLESAC | Robust | AML | no
MAPSAC | Robust | AML | no

LS: Least-Squares; Eig.: Eigen Analysis; AML: Approximate Maximum Likelihood

Page 39

4.6 Experimental comparison – Methodology

[Figure: point correspondences on the image planes of camera 1 and camera 2]

Page 40

4.6 Experimental comparison – Synthetic images

Methods: 1.- 7-Point; 2.- 8-Point with Least-Squares; 3.- 8-Point with Eigen Analysis; 4.- Rank-2 Constraint. * Mean and Std. in pixels.

Linear methods: good results if the points are well located and there are no outliers.

[Figure: mean and std of the residuals for the linear methods]

Page 41

4.6 Experimental comparison – Synthetic images

Methods: 5.- Iterative Linear; 6.- Iterative Newton-Raphson; 7.- Minimization in parameter space; 8.- Gradient using LS; 9.- Gradient using Eigen; 10.- FNS; 11.- CFNS. * Mean and Std. in pixels.

Iterative methods: can cope with noise but are inefficient in the presence of outliers.

Page 42

4.6 Experimental comparison – Synthetic images

Methods: 12.- M-Estimator using LS; 13.- M-Estimator using Eigen; 14.- M-Estimator proposed by Torr; 15.- LMedS using LS; 16.- LMedS using Eigen; 17.- RANSAC; 18.- MLESAC; 19.- MAPSAC. * Mean and Std. in pixels.

Robust methods: cope with both noise and outliers.

Page 43

4.6 Experimental comparison – Synthetic images

1.- 7-Point; 2.- 8-Point with Least-Squares; 3.- 8-Point with Eigen Analysis; 4.- Rank-2 Constraint; 5.- Iterative Linear; 6.- Iterative Newton-Raphson; 7.- Minimization in parameter space; 8.- Gradient using LS; 9.- Gradient using Eigen; 10.- FNS; 11.- CFNS; 12.- M-Estimator using LS; 13.- M-Estimator using Eigen; 14.- M-Estimator proposed by Torr; 15.- LMedS using LS; 16.- LMedS using Eigen; 17.- RANSAC; 18.- MLESAC; 19.- MAPSAC.

[Figure: computing time of the linear, iterative and robust methods]

Page 44

4.6 Experimental comparison – Real images

Page 45

4.6 Experimental comparison – Real images

Methods: 1.- 7-Point; 2.- 8-Point with Least-Squares; 3.- 8-Point with Eigen Analysis; 4.- Rank-2 Constraint; 5.- Iterative Linear; 6.- Iterative Newton-Raphson; 7.- Minimization in parameter space; 8.- Gradient using LS; 9.- Gradient using Eigen; 10.- FNS; 11.- CFNS; 12.- M-Estimator using LS; 13.- M-Estimator using Eigen; 14.- M-Estimator proposed by Torr; 15.- LMedS using LS; 16.- LMedS using Eigen; 17.- RANSAC; 18.- MLESAC; 19.- MAPSAC. * Mean and Std. in pixels.

Page 46

4.6 Experimental comparison – Conclusions

• Survey of 15 methods of computing F and up to 19 different implementations
• Description of the estimators from an algorithmic point of view
• Conditions: Gaussian noise, outliers and real images
  – Linear methods: good results if the points are well located and the correspondence problem has previously been solved (no outliers)
  – Iterative methods: can cope with noise but are inefficient in the presence of outliers
  – Robust methods: cope with both noise and outliers
• Least-squares performs worse than eigen analysis and approximate maximum likelihood
• Rank-2 matrices are preferred if a good geometry is required
• Better results are obtained when the data are previously normalized

Page 47

4.6 Experimental comparison – Conclusions

Publications:
– X. Armangué and J. Salvi. Overall view regarding fundamental matrix estimation. Image and Vision Computing, Vol. 21, Issue 2, pp. 205-220, February 2003.
– J. Salvi. An approach to coded structured light to obtain three dimensional information. PhD Thesis, University of Girona, 1997. Chapter 3.
– J. Salvi, X. Armangué, J. Pagès. A survey addressing the fundamental matrix estimation problem. IEEE International Conference on Image Processing, ICIP 2001, Thessaloniki, Greece, October 2001.

More Information: http://eia.udg.es/~qsalvi/


Page 50

4.7 Sample: Mobile robot performing 3D mapping

• Building a 3D map of an unknown environment using a stereo camera system
• Localization of the robot in the map
• Providing a new useful sensor for the robot control architecture

[Figure: GRILL mobile robot with a stereo camera system]

Page 51

4.7 3D mapping – Robot components

[Figure: GRILL mobile robot architecture. Control system: Pioneer 2 base with microcontroller, motor encoders and sonar; onboard control computer. Vision system: onboard vision computer (PC104+) with frame grabbers A and B on the PCI bus, cameras A and B, wireless Ethernet and radio video emitter; RS-232, USB and Ethernet links join the subsystems]

Page 52

4.7 3D mapping – Data flow diagram

[Figure: processing levels. Camera modelling and calibration: A/D, calibration, remove distortion. Low-level image processing: gradients, filtering. High-level image processing: feature extraction, correspondence problem, tracking. Description level (stereo vision and reconstruction; localization and map building): 3D information, motion estimation, localization, map building]

Page 53

4.7 3D mapping – Data flow diagram

[Figure: overall data flow: cameras A and B produce image sequences A and B (image flow); 2D image processing yields 2D points (2D points flow); 3D image processing yields 3D points (3D points flow); map building and localization yields the position (position flow), the 3D map and the trajectory]

Page 54

4.7 3D mapping – Data flow diagram

[Figure: detailed data flow. 2D image processing, per camera: RGB to I, remove distortion, corners, image buffers and image-point buffers A/B; spatial cross correlation between the two views and temporal cross correlation within each sequence. 3D image processing: stereo reconstruction, 3D tracker, outliers detection. Map building and localization: local localization, global localization, build 3D map, yielding the 3D map and the trajectory]

Page 55

4.7 3D mapping – Input sequence

• Cameras are calibrated
• Both stereo images are obtained simultaneously

Page 56

4.7 3D mapping – RGB to I

• Description: converting a color image to an intensity image
• Input: color image (RGB)
• Output: intensity image
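A minimal sketch of this step (the slides do not state the exact conversion; the common luma weights used here are an assumption, and a plain channel mean is another option):

```python
import numpy as np

def rgb_to_intensity(img):
    """Convert an (H, W, 3) RGB image to an (H, W) intensity image.

    Uses the common luma weights 0.299, 0.587, 0.114 (an assumption;
    the original system may have used a different conversion).
    """
    img = np.asarray(img, dtype=float)
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
```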

Page 57

4.7 3D mapping – Remove Distortion

• Description: removing the distortion of an image using the camera calibration parameters
• Input: distorted image
• Output: undistorted image

[Figures: intensity images; undistorted images]

Page 58

4.7 3D mapping – Corners

• Description: detection of corners using a variant of the Harris corner detector
• Input: undistorted image
• Output: corners list

[Figures: undistorted images; corners detected]

Page 59

4.7 3D mapping – Spatial Cross Correlation

• Description: spatial cross correlation using the fundamental matrix obtained from the camera calibration parameters
• Input: undistorted image A; corners list A; undistorted image B; corners list B
• Output: spatial points list; spatial matches list

[Figures: corners detected; points and matches list]
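When F is known from calibration, the correlation search in image B can be restricted to corners lying near the epipolar line of each corner in image A. A sketch (names and the pixel threshold are assumptions):

```python
import numpy as np

def epipolar_candidates(F, corner_a, corners_b, max_dist=1.5):
    """Corners of image B lying near the epipolar line of a corner of image A.

    F maps a point of image A to its epipolar line l' = F m in image B;
    the point-to-line distance in pixels prunes the candidates that the
    cross correlation then has to score.
    """
    l = F @ np.array([corner_a[0], corner_a[1], 1.0])
    a, b, c = l
    d = np.abs(a * corners_b[:, 0] + b * corners_b[:, 1] + c) / np.hypot(a, b)
    return corners_b[d < max_dist]
```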

Page 60

4.7 3D mapping – Temporal Cross Correlation

• Description: temporal cross correlation using a small search window
• Input: previous undistorted image; previous corners list; current undistorted image; current corners list
• Output: temporal points list; temporal matches list

[Figures: corners detected; points and matches list]
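A sketch of temporal matching with normalized cross correlation over a small search window (function names and window sizes are assumptions):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross correlation between two equal-size patches, in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_corner(img_prev, img_cur, corner, win=3, search=5):
    """Track `corner` (row, col) from the previous to the current frame.

    Compares the (2*win+1)^2 patch around the corner with patches inside
    a (2*search+1)^2 search window in the current frame, keeping the
    position with the best NCC score.
    """
    y, x = corner
    ref = img_prev[y - win:y + win + 1, x - win:x + win + 1]
    best, best_pos = -2.0, corner
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = img_cur[cy - win:cy + win + 1, cx - win:cx + win + 1]
            if cand.shape != ref.shape:
                continue          # window falls outside the image
            s = ncc(ref, cand)
            if s > best:
                best, best_pos = s, (cy, cx)
    return best_pos, best
```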

Page 61

4.7 3D mapping – Stereo Reconstruction

• Description: stereo reconstruction by triangulation using the camera calibration parameters
• Input: spatial points list; spatial matches list
• Output: 3D points list

[Figures: points and matches list; 3D points list]

Page 62

4.7 3D mapping – 3D Tracker

• Description: tracking 3D points using temporal cross correlation
• Input: 3D points list; temporal points and matches lists A; temporal points and matches lists B
• Output: 3D points history; points and matches histories A and B

[Figures: points and matches tracker; 3D points tracker]

Page 63

4.7 3D mapping – Outliers Detection

• Description: detection of outliers by comparing the distances between the current and previous 3D points lists
• Input: odometry position; current 3D points list; previous 3D points list
• Output: outliers list

[Figures: outlier example; current and previous 3D points with and without outliers]
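A sketch of this check: predict each previous 3D point under the odometry motion and flag points whose distance to the current estimate is too large (names and threshold are assumptions):

```python
import numpy as np

def reject_outliers(prev_pts, cur_pts, motion, max_dist=0.05):
    """Flag tracked 3D points whose apparent motion disagrees with odometry.

    prev_pts, cur_pts: (n, 3) arrays of the same tracked points in the
    previous and current camera frames; `motion` is a 4x4 homogeneous
    transform from odometry mapping previous coordinates to current ones.
    Points farther than `max_dist` from their predicted position are
    marked as outliers.
    """
    n = len(prev_pts)
    homog = np.hstack([prev_pts, np.ones((n, 1))])
    predicted = (motion @ homog.T).T[:, :3]
    dist = np.linalg.norm(predicted - cur_pts, axis=1)
    return dist > max_dist        # boolean outlier mask
```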

Page 64

4.7 3D mapping – Local Localization

• Description: computing the absolute position in the map by minimizing the distance between the projections of the 3D map points onto the cameras and the current 2D points and matches
• Input: odometry position as initial guess; previous 3D points list from the map; current 2D points list; current 2D matches list
• Output: absolute position

[Figure: absolute position in the map]

Page 65

4.7 3D mapping – Global Localization

• Description: computing the global trajectory followed by the robot
• Input: local position
• Output: global position

Page 66

4.7 3D mapping – Building 3D Map

• Description: building the 3D map from the 3D points with a tracking history longer than n frames
• Input: global position; current 3D points list; previous 3D points list
• Output: 3D map

[Figure: 3D map]

Page 67

4.7 3D mapping – Video