
CAMERAS AND PROJECTIONS

Dan Witzner Hansen

OUTLINE

Previously???
Projections
Pinhole cameras
Perspective projection
Camera matrix
Camera calibration matrix
Orthographic projection

PROJECTION AND PERSPECTIVE EFFECTS

CAMERA OBSCURA

"Reinerus Gemma-Frisius, observed an eclipse of the sun at Louvain on January 24, 1544, and later he used this illustration of the event in his book De Radio Astronomica et Geometrica, 1545. It is thought to be the first published illustration of a camera obscura..." Hammond, John H., The Camera Obscura, A Chronicle

http://www.acmi.net.au/AIC/CAMERA_OBSCURA.html

PINHOLE CAMERA

The pinhole camera is a simple model that approximates the imaging process: perspective projection.

Fig from Forsyth and Ponce

If we treat pinhole as a point, only one ray from any given point can enter the camera.

Figure labels: virtual image, pinhole

CAMERA OBSCURA

Jetty at Margate England, 1898.

Adapted from R. Duraiswami

http://brightbytes.com/cosite/collection2.html

Around the 1870s. An attraction in the late 19th century.

CAMERA OBSCURA AT HOME

Sketch from http://www.funsci.com/fun3_en/sky/sky.htm and http://room-camera-obscura.blogspot.com/

DIGITAL CAMERAS

Film → sensor array
• Often an array of charge-coupled devices (CCDs)
• Each CCD is a light-sensitive diode that converts photons (light energy) to electrons

Diagram: camera (optics + CCD array) → frame grabber → computer

COLOR SENSING IN DIGITAL CAMERAS

Source: Steve Seitz

Estimate missing components from neighboring values (demosaicing)

Bayer grid

PINHOLE SIZE / APERTURE

Smaller

Larger

How does the size of the aperture affect the image we’d get?

ADDING A LENS

A lens focuses light onto the film. There is a specific distance at which objects are “in focus”; other points project to a “circle of confusion” in the image. Changing the shape of the lens changes this distance.

LENSES

A lens focuses parallel rays onto a single focal point; the focal point lies at a distance f beyond the plane of the lens. f is a function of the shape and index of refraction of the lens.

An aperture of diameter D restricts the range of rays; the aperture may be on either side of the lens.

Lenses are typically spherical (easier to produce).

Figure labels: focal point F, optical center (center of projection)

THIN LENSES

Thin lens equation: $\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}$ (object distance $d_o$, image distance $d_i$)

Any object point satisfying this equation is in focus. What is the shape of the focus region? How can we change the focus region?

Thin lens applet: http://www.phy.ntnu.edu.tw/java/Lens/lens_e.html (by Fu-Kwun Hwang)
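As a numerical illustration (my own sketch, not part of the slides; d_o and d_i are my symbols for object and image distance):

```python
def image_distance(object_distance, focal_length):
    """Solve the thin-lens equation 1/d_o + 1/d_i = 1/f for the image distance d_i."""
    if object_distance == focal_length:
        raise ValueError("object at the focal plane: the image forms at infinity")
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Example: an object 2 m in front of a 50 mm lens focuses ~51.3 mm behind it (units: mm).
print(image_distance(2000.0, 50.0))
```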

FOCUS AND DEPTH OF FIELD

Image credit: cambridgeincolour.com

FOCUS AND DEPTH OF FIELD

Depth of field: distance between image planes where blur is tolerable

Thin lens: scene points at distinct depths come in focus at different image planes.

(Real camera lens systems have greater depth of field.)

Shapiro and Stockman

“circles of confusion”

DEPTH OF FIELD

Changing the aperture size affects depth of field: a smaller aperture increases the range in which the object is approximately in focus.

f / 5.6

f / 32

Flower images from Wikipedia http://en.wikipedia.org/wiki/Depth_of_field

DEPTH FROM FOCUS

[figs from H. Jin and P. Favaro, 2002]

Images from same point of view, different camera parameters

3d shape / depth estimates

ISSUES WITH DIGITAL CAMERAS

Noise: big difference between consumer vs. SLR-style cameras; low light is where you most notice noise
Compression: creates artifacts except in uncompressed formats (TIFF, RAW)
Color: color fringing artifacts from Bayer patterns
Blooming: charge overflowing into neighboring pixels
In-camera processing: oversharpening can produce halos
Interlaced vs. progressive scan video: even/odd rows from different exposures
Are more megapixels better? Requires a higher quality lens; noise issues
Stabilization: compensate for camera shake (mechanical vs. electronic)

More info online, e.g.,
• http://electronics.howstuffworks.com/digital-camera.htm
• http://www.dpreview.com/

MODELING PROJECTIONS

PERSPECTIVE EFFECTS

PERSPECTIVE AND ART

Use of correct perspective projection indicated in 1st century B.C. frescoes

Skill resurfaces in Renaissance: artists develop systematic methods to determine perspective projection (around 1480-1515)

Dürer, 1525; Raphael

MODELING PROJECTION

PERSPECTIVE EFFECTS

Far away objects appear smaller

Forsyth and Ponce

PERSPECTIVE EFFECTS

Parallel lines in the scene intersect in the image

Converge in image on horizon line.

Figure: image plane (virtual), scene, pinhole

Pinhole camera model

$(X, Y, Z)^T \mapsto (fX/Z,\; fY/Z)^T$

In homogeneous coordinates:

$$\begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} = \begin{bmatrix} f & & & 0 \\ & f & & 0 \\ & & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$
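A small numerical sketch of this projection (the focal length and point values are my own examples):

```python
import numpy as np

f = 800.0                                # focal length (example value)
P = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]])             # P = diag(f, f, 1) [I | 0]

X = np.array([0.5, 0.2, 4.0, 1.0])       # homogeneous world point (X, Y, Z, 1)
x = P @ X                                # homogeneous image point (fX, fY, Z)
print(x[:2] / x[2])                      # inhomogeneous result (fX/Z, fY/Z) = (100, 40)
```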

FIELD OF VIEW

Angular measure of portion of 3D space seen by the camera

Depends on focal length

Images from http://en.wikipedia.org/wiki/Angle_of_view
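A rough sketch of the relation between focal length and field of view for the pinhole model (sensor size and focal length below are example values of mine):

```python
import numpy as np

def field_of_view_deg(sensor_width, focal_length):
    """Horizontal angle of view for a pinhole model: 2 * arctan(d / (2 f))."""
    return np.degrees(2.0 * np.arctan(sensor_width / (2.0 * focal_length)))

# 36 mm wide sensor with a 50 mm lens -> roughly 39.6 degrees
print(field_of_view_deg(36.0, 50.0))
```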

Pinhole camera model

$$\begin{pmatrix} fX \\ fY \\ Z \end{pmatrix} = \begin{bmatrix} f & & & 0 \\ & f & & 0 \\ & & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \begin{bmatrix} f & & \\ & f & \\ & & 1 \end{bmatrix} \begin{bmatrix} 1 & & & 0 \\ & 1 & & 0 \\ & & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

In short: $x = PX$ with $P = \mathrm{diag}(f, f, 1)\,[\,I \mid 0\,]$

Principal point offset

$(X, Y, Z)^T \mapsto (fX/Z + p_x,\; fY/Z + p_y)^T$, where $(p_x, p_y)^T$ is the principal point.

$$\begin{pmatrix} fX + Zp_x \\ fY + Zp_y \\ Z \end{pmatrix} = \begin{bmatrix} f & & p_x & 0 \\ & f & p_y & 0 \\ & & 1 & 0 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

$x = K[\,I \mid 0\,]\, X_{cam}$, with calibration matrix

$$K = \begin{bmatrix} f & & p_x \\ & f & p_y \\ & & 1 \end{bmatrix}$$

Camera rotation and translation

$\tilde{X}_{cam} = R(\tilde{X} - \tilde{C})$

$$X_{cam} = \begin{bmatrix} R & -R\tilde{C} \\ 0 & 1 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \begin{bmatrix} R & -R\tilde{C} \\ 0 & 1 \end{bmatrix} X$$

$x = K[\,I \mid 0\,]\, X_{cam}$

$x = KR[\,I \mid -\tilde{C}\,]\, X$

$x = PX$ with $P = K[\,R \mid t\,]$, $t = -R\tilde{C}$
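A minimal sketch assembling $P = K[\,R \mid t\,]$ and projecting a point (all numbers are example values of my choosing):

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])            # calibration matrix

theta = np.deg2rad(10.0)                          # small rotation about the Y axis
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
C = np.array([0.1, 0.0, -2.0])                    # camera centre in world coordinates

t = -R @ C                                        # t = -R C~
P = K @ np.hstack([R, t[:, None]])                # P = K [R | t]

X = np.array([0.0, 0.0, 3.0, 1.0])                # homogeneous world point
x = P @ X
print(x[:2] / x[2])                               # pixel coordinates
```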

CCD CAMERA

Allowing non-square pixels, with $m_x$ and $m_y$ pixels per unit distance in image coordinates:

$$K = \begin{bmatrix} m_x f & & m_x p_x \\ & m_y f & m_y p_y \\ & & 1 \end{bmatrix} = \begin{bmatrix} \alpha_x & & x_0 \\ & \alpha_y & y_0 \\ & & 1 \end{bmatrix}$$

When is skew non-zero?

$$K = \begin{bmatrix} \alpha_x & s & x_0 \\ & \alpha_y & y_0 \\ & & 1 \end{bmatrix}$$

The skew $s$ corresponds to a pixel skew angle of $\arctan(1/s)$. For CCD/CMOS sensors, always $s = 0$; taking an image of an image (non-coinciding principal axes) can give $s \neq 0$.

Resulting camera: $HP$

Projection equation

• The projection matrix models the cumulative effect of all parameters
• Useful to decompose into a series of operations

$$x = \Pi X: \quad \begin{pmatrix} sx \\ sy \\ s \end{pmatrix} = \begin{bmatrix} * & * & * & * \\ * & * & * & * \\ * & * & * & * \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$

$$\Pi = \underbrace{\begin{bmatrix} f s_x & 0 & x'_c \\ 0 & f s_y & y'_c \\ 0 & 0 & 1 \end{bmatrix}}_{\text{intrinsics}} \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}}_{\text{projection}} \underbrace{\begin{bmatrix} R_{3\times3} & 0_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}}_{\text{rotation}} \underbrace{\begin{bmatrix} I_{3\times3} & T_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}}_{\text{translation}}$$

($I$ denotes the identity matrix.)
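Going the other way, the intrinsic and rotation factors can be recovered from the left 3x3 block of the projection matrix with an RQ decomposition; a sketch (assuming SciPy is available, and ignoring the det(R) = -1 corner case):

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split P ~ K [R | t] into intrinsics K, rotation R and translation t."""
    M = P[:, :3]
    K, R = rq(M)                        # M = K R, K upper triangular, R orthonormal
    D = np.diag(np.sign(np.diag(K)))    # force a positive diagonal for K
    K, R = K @ D, D @ R                 # D is its own inverse, so the product K R is unchanged
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t            # normalize so K[2, 2] = 1
```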

CAMERA PARAMETERS

A camera is described by several parameters:
• Translation T of the optical center from the origin of world coordinates
• Rotation R of the image plane
• Focal length f, principal point (x’c, y’c), pixel size (sx, sy)
• Translation and rotation are called the “extrinsics”; focal length, principal point and pixel size are the “intrinsics”
• The definitions of these parameters are not completely standardized (especially the intrinsics) and vary from one book to another

PROJECTION PROPERTIES

Many-to-one: any points along same ray map to same point in image

Points map to points; lines map to lines (collinearity is preserved). Distances and angles are not preserved.
Degenerate cases:
– A line through the focal point projects to a point.
– A plane through the focal point projects to a line.
– A plane perpendicular to the image plane projects to part of the image.

CAMERAS?

More about camera calibration later in the course.

We will see more about what information can be gathered from the images using knowledge of planes and calibrated cameras


THE PROJECTIVE PLANE

Why do we need homogeneous coordinates?
To represent points at infinity, homographies, perspective projection, multi-view relationships.

What is the geometric intuition?
A point in the image is a ray in projective space.
• Each point (x, y) on the plane is represented by a ray (sx, sy, s); all points on the ray are equivalent: (x, y, 1) ≅ (sx, sy, s)

Figure: image plane, ray (sx, sy, s), point (x, y, 1)

PROJECTIVE LINES

What does a line in the image correspond to in projective space?
• A line is a plane of rays through the origin
– all rays (x, y, z) satisfying: ax + by + cz = 0

In vector notation:
$$0 = \begin{pmatrix} a & b & c \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$$

• A line is also represented as a homogeneous 3-vector l

POINT AND LINE DUALITY

A line l is a homogeneous 3-vector. It is perpendicular to every point (ray) p on the line: l · p = 0.

What is the intersection of two lines l1 and l2?
• p is perpendicular to l1 and l2, so p = l1 × l2

What is the line l spanned by rays p1 and p2?
• l is perpendicular to p1 and p2, so l = p1 × p2
• l is the plane normal

Points and lines are dual in projective space: given any formula, we can switch the meanings of points and lines to get another formula.

IDEAL POINTS AND LINES

Ideal point (“point at infinity”): p ≅ (x, y, 0), parallel to the image plane. It has infinite image coordinates.

Ideal line: l ≅ (a, b, 0), parallel to the image plane.
• Corresponds to a line in the image (finite coordinates) that goes through the image origin (principal point).

INTERPRETING THE CAMERA MATRIX: COLUMN VECTORS

p1, p2, p3 are the vanishing points along the X, Y, Z axes; p4 is the image of the world coordinate origin (the camera center itself, in world coordinates, is the null vector of P).

INTERPRETING THE CAMERA MATRIX: ROW VECTORS

P3 is the principal plane: the plane containing the camera center and parallel to the image plane. P1 and P2 are the axis planes, formed by the camera center together with the image Y and X axes respectively.

MOVING THE CAMERA CENTER

Figure labels: motion parallax, epipolar line

WEAK PERSPECTIVE

Approximation: treat magnification as constant

Assumes scene depth << average distance to camera

World points:

Image plane

ORTHOGRAPHIC PROJECTION

Given a camera at constant distance from the scene, world points are projected along rays parallel to the optical axis.

ORTHOGRAPHIC (“TELECENTRIC”) LENSES

http://www.lhup.edu/~dsimanek/3d/telecent.htm

Navitar telecentric zoom lens

CORRECTING RADIAL DISTORTION

from Helmut Dersch

DISTORTION

Radial distortion of the image is caused by imperfect lenses. Deviations are most noticeable for rays that pass through the edge of the lens.

Figure: no distortion, pincushion, barrel

MODELING DISTORTION

To model lens distortion, use the following operations instead of the standard projection matrix multiplication:
1. Project to “normalized” image coordinates
2. Apply radial distortion
3. Apply focal length; translate image center
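A sketch of that pipeline using the common polynomial radial model with coefficients k1, k2 (the coefficients and function name are my own illustration, not the slide's notation):

```python
import numpy as np

def project_with_distortion(X, K, R, t, k1, k2):
    """Project a 3D point, applying polynomial radial distortion in normalized coordinates."""
    Xc = R @ X + t                            # camera coordinates
    xn, yn = Xc[0] / Xc[2], Xc[1] / Xc[2]     # 1. normalized image coordinates
    r2 = xn**2 + yn**2
    scale = 1.0 + k1 * r2 + k2 * r2**2        # 2. radial distortion factor
    u, v, w = K @ np.array([scale * xn, scale * yn, 1.0])   # 3. focal length + image center
    return np.array([u / w, v / w])
```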

OTHER CAMERA TYPES

Omnidirectional image

Time-of-flight

Plenoptic camera

SUMMARY

Image formation is affected by geometry, photometry, and optics.
Projection equations express how world points are mapped to the 2D image.
Homogeneous coordinates allow a linear system for the projection equations.
Lenses make the pinhole model practical.
Parameters (focal length, aperture, lens diameter, …) affect the image obtained.

WHAT DOES CAMERA CALIBRATION GIVE?

Calculate distances to objects of known size.
Calculate angles:
• rotations between two views of a plane (see later)
• perpendicular vectors
Calibration can be done automatically with multiple cameras.

INFORMATION FROM PLANES AND CAMERAS

An image line l defines a plane through the camera center with normal $n = K^T l$, measured in the camera’s Euclidean frame (e.g. giving the plane’s orientation).

Given K and a homography, two possible normals (and hence orientations) are possible.

$$\cos\theta = \frac{v_1^T \omega\, v_2}{\sqrt{v_1^T \omega\, v_1}\;\sqrt{v_2^T \omega\, v_2}}$$

Orthogonality: $v_1^T \omega\, v_2 = 0$, and for lines $l_1^T \omega^{*} l_2 = 0$, where

$$\omega = K^{-T} K^{-1} = (K K^T)^{-1}$$

ANGLES AND ORTHOGONALITY RELATION

Image of Absolute Conic (IAC)

HOMOGRAPHIES REVISITED

A SPECIAL CASE, PLANES

Image plane(retina, film, canvas)

Observer

World plane

2D → 2D

A plane to plane projective transformation

Homography matrix (3x3)

APPLICATIONS OF HOMOGRAPHIES

RECTIFYING SLANTED VIEWS

Corrected image (front-to-parallel)

CAMERA ROTATION

$x = K[\,I \mid 0\,]X$ and, after a pure rotation of the camera, $x' = K[\,R \mid 0\,]X$, so

$$x' = KRK^{-1}x, \qquad H = KRK^{-1}$$

Example: homography between the world plane Z = 0 and the image.

$$x = PX = [\,p_1\; p_2\; p_3\; p_4\,]\begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix} = [\,p_1\; p_2\; p_4\,]\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}$$

$P = K[\,R \mid t\,]$ implies $H = K[\,r_1, r_2, t\,]$.

The most general transformation that can occur between a scene plane and an image plane under perspective imaging is a plane projective transformation.
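A brief sketch of that relation (drop the third column of [R | t], since Z = 0 on the plane):

```python
import numpy as np

def plane_homography(K, R, t):
    """Homography mapping the world plane Z = 0 into the image: H = K [r1 r2 t]."""
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

# A point (X, Y, 0) on the plane maps the same way through P = K [R | t] and through H:
# P @ (X, Y, 0, 1) equals H @ (X, Y, 1) up to scale.
```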

ACTION OF PROJECTIVE CAMERA ON PLANES

HOMOGRAPHY ESTIMATION

Number of measurements needed for estimating H

How do we estimate H automatically? We need to consider robustness to outliers.

CROSS PRODUCT AS MATRIX MULTIPLICATION

Recall: if $a = (x_a, y_a, z_a)^T$ and $b = (x_b, y_b, z_b)^T$, then

$$c = a \times b = (y_a z_b - z_a y_b,\; z_a x_b - x_a z_b,\; x_a y_b - y_a x_b)^T$$

which can be written as a matrix multiplication $c = [a]_\times b$ with

$$[a]_\times = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}$$
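A tiny helper expressing this standard construction:

```python
import numpy as np

def skew(a):
    """Matrix [a]_x such that skew(a) @ b equals np.cross(a, b)."""
    return np.array([[   0.0, -a[2],  a[1]],
                     [  a[2],   0.0, -a[0]],
                     [ -a[1],  a[0],   0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(np.allclose(skew(a) @ b, np.cross(a, b)))   # True
```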

ESTIMATING H: DLT ALGORITHM

$x'_i = Hx_i$ is an equation involving homogeneous vectors, so $Hx_i$ and $x'_i$ need only be in the same direction, not strictly equal.

We can specify “same directionality” by using a cross product formulation: $x'_i \times Hx_i = 0$.

See Hartley & Zisserman, Chapter 3.1-3.1.1

HOMOGRAPHY ESTIMATION

How would you solve this equation (remember that it is the elements of H that need to be estimated)?

Each pair (x, x’) gives us 2 pieces of information, and therefore at least 4 point correspondences are needed.

Duality and other relations: lines and conics may also be used.

The constraints stack into $A h = 0$, where $A$ is $2n \times 9$ and $h$ is the 9-vector of entries of $H$.

HOMOGRAPHIES

DLT ALGORITHM

Objective: Given n≥4 2D-to-2D point correspondences {xi↔xi’}, determine the 2D homography matrix H such that xi’=Hxi

Algorithm

(i) For each correspondence xi ↔ xi’ compute Ai (usually only the first two rows are needed).

(ii) Assemble n 2x9 matrices Ai into a single 2nx9 matrix A

(iii) Obtain SVD of A. Solution for h is last column of V

(iv) Determine H from h (i.e. reshape from vector to matrix form)
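A compact sketch of the (unnormalized) DLT, using the two rows of each Ai from the cross-product formulation (function and variable names are mine):

```python
import numpy as np

def dlt_homography(x, xp):
    """Estimate H with x' ~ H x from n >= 4 correspondences (two n-by-2 arrays)."""
    rows = []
    for (u, v), (up, vp) in zip(x, xp):
        X = [u, v, 1.0]
        rows.append([0.0, 0.0, 0.0, *[-c for c in X], *[vp * c for c in X]])
        rows.append([*X, 0.0, 0.0, 0.0, *[-up * c for c in X]])
    A = np.asarray(rows)                  # 2n x 9
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)           # h = last right singular vector of A
```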

NORMALIZING TRANSFORMATIONS

DLT is not invariant to the choice of image coordinates, so what is a good choice? E.g.:
• translate the centroid to the origin
• scale so that the average distance to the origin is $\sqrt{2}$
• do this independently for both images

(Alternatively, a fixed normalizing transform based on the image width w and height h can be used.)

This defines a normalizing similarity transform $T_{norm}$ applied to the points before estimation.

NORMALIZED DLT ALGORITHM

Objective: Given n≥4 2D-to-2D point correspondences {xi↔xi’}, determine the 2D homography matrix H such that xi’=Hxi

Algorithm

(i) Normalize points: $\tilde{x}_i = T_{norm}\, x_i$, $\tilde{x}'_i = T'_{norm}\, x'_i$

(ii) Apply the DLT algorithm to $\tilde{x}_i \leftrightarrow \tilde{x}'_i$, obtaining $\tilde{H}$

(iii) Denormalize the solution: $H = T'^{-1}_{norm}\, \tilde{H}\, T_{norm}$
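A sketch of the normalization step (centroid to the origin, average distance √2, following Hartley & Zisserman; names are mine):

```python
import numpy as np

def normalization_transform(pts):
    """Similarity T_norm so the transformed points have zero centroid and mean distance sqrt(2)."""
    centroid = pts.mean(axis=0)
    mean_dist = np.mean(np.linalg.norm(pts - centroid, axis=1))
    s = np.sqrt(2.0) / mean_dist
    return np.array([[  s, 0.0, -s * centroid[0]],
                     [0.0,   s, -s * centroid[1]],
                     [0.0, 0.0,              1.0]])

# Normalized DLT: estimate H_tilde from T @ x and T' @ x' with the plain DLT,
# then denormalize with H = inv(T') @ H_tilde @ T.
```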

CAMERA CALIBRATION


HOW CAN CAMERA CALIBRATION BE DONE?

Any suggestions?

BASIC EQUATIONS

$$x_i = P X_i \;\Rightarrow\; [x_i]_\times P X_i = 0$$

Stacking the constraints gives $A p = 0$, where $p$ contains the entries of $P$; 5½ point correspondences are needed.

A SIMPLE CALIBRATION DEVICE (= Zhang’s calibration method)

1. Compute H for each square (corners (0,0), (1,0), (0,1), (1,1))

2. Compute the imaged circular points H(1, ±i, 0)T

3. Fit a conic ω to the 6 imaged circular points

4. Compute K from ω through Cholesky factorization

VANISHING LINES, POINTS AND CALIBRATED CAMERAS


VANISHING POINTS

Properties:
• Any two parallel lines have the same vanishing point v
• The ray from C through v is parallel to the lines
• An image may have more than one vanishing point; in fact every pixel is a potential vanishing point

Figure: image plane, camera center C, lines on the ground plane, vanishing point v

VANISHING POINTS AND LINES

Vanishing line from 3 lines: the vanishing line of a plane can be determined from the images of three equally spaced parallel lines $l_0, l_1, l_2$ (see Hartley & Zisserman).

VANISHING POINTS

Vanishing point: the projection of a point at infinity.

Figure: image plane, camera center C, line on ground plane, vanishing point v

COMPUTING VANISHING POINTS (FROM LINES)

Intersect line p1q1 with line p2q2 (figure labels: v, p1, q1, p2, q2).

Least squares version
• Better to use more than two lines and compute the “closest” point of intersection
• See notes by Bob Collins for one good way of doing this:
http://www-2.cs.cmu.edu/~ph/869/www/notes/vanishing.txt
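An algebraic least-squares sketch (a simple stand-in for the method in Collins’ notes, not a transcription of it): with each line as a homogeneous 3-vector l_i, the point minimizing the sum of (l_i^T v)^2 is the smallest right singular vector of the stacked lines.

```python
import numpy as np

def vanishing_point(lines):
    """Algebraic least-squares intersection of several homogeneous image lines."""
    L = np.asarray(lines)                 # n x 3, one line (a, b, c) per row
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]
    return v / v[2] if abs(v[2]) > 1e-12 else v   # stays an ideal point if third coord ~ 0

# Each line can be built from two endpoints p, q (homogeneous) as np.cross(p, q).
```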


COMPUTING VANISHING POINTS

Properties:
• $P_\infty$ is a point at infinity; v is its projection: $v = \Pi P_\infty$
• They depend only on the line direction D
• Parallel lines $P_0 + tD$, $P_1 + tD$ intersect at $P_\infty$

$$P_t = P_0 + tD = \begin{pmatrix} X_0 + tD_X \\ Y_0 + tD_Y \\ Z_0 + tD_Z \\ 1 \end{pmatrix} \cong \begin{pmatrix} X_0/t + D_X \\ Y_0/t + D_Y \\ Z_0/t + D_Z \\ 1/t \end{pmatrix} \;\xrightarrow{\,t\to\infty\,}\; \begin{pmatrix} D_X \\ D_Y \\ D_Z \\ 0 \end{pmatrix} = P_\infty$$


FUN WITH VANISHING POINTS


PERSPECTIVE CUES


PERSPECTIVE CUES


PERSPECTIVE CUES

VANISHING POINTS AND ANGLES

Projection of a direction d (a point at infinity):

$$x = K[\,I \mid 0\,]X = K[\,I \mid 0\,]\begin{pmatrix} d \\ 0 \end{pmatrix} = Kd$$

Back-projection: $d = K^{-1}x$

Angle between two directions:

$$\cos\theta = \frac{d_1^T d_2}{\sqrt{d_1^T d_1}\,\sqrt{d_2^T d_2}} = \frac{x_1^T K^{-T}K^{-1} x_2}{\sqrt{x_1^T K^{-T}K^{-1} x_1}\;\sqrt{x_2^T K^{-T}K^{-1} x_2}}$$
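A sketch of that computation (the calibration matrix and pixel locations below are placeholders of my own):

```python
import numpy as np

def angle_between_image_points(x1, x2, K):
    """Angle (degrees) between the viewing directions back-projected from two image points."""
    d1 = np.linalg.solve(K, x1)           # d = K^{-1} x
    d2 = np.linalg.solve(K, x2)
    c = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
x1 = np.array([320.0, 240.0, 1.0])        # at the principal point
x2 = np.array([400.0, 240.0, 1.0])
print(angle_between_image_points(x1, x2, K))   # about 5.7 degrees
```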

NEXT WEEK – NO LECTURES

WORK ON MANDATORY ASSIGNMENT
