# SIGGRAPH 2012 Computational Plenoptic Imaging Course - 4: Light Fields


Light Field Acquisition

Douglas Lanman

NVIDIA Research

Introduction to Light Fields

The 5D Plenoptic Function

Q: What is the set of all things that one can ever see?

A: The Plenoptic Function [Adelson and Bergen 1991]

(from plenus, complete or full, and optic)

P(θ, φ, λ, t)


The 5D Plenoptic Function

P(θ, φ, λ, t, px, py, pz)

P(θ, φ, λ, t, px, py, pz) defines the intensity of light:

• as a function of viewpoint
• as a function of time
• as a function of wavelength


The 5D Plenoptic Function

Let’s ignore color and time (i.e., treat them as attributes of rays)…

The plenoptic function is 5D:
• 3D position
• 2D direction

Require 5D to represent attributes across occlusions

P(θ, φ, px, py, pz)

The 4D Light Field

Consider a region free of occluding objects…

The plenoptic function (light field) is 4D:
• 2D position
• 2D direction

The space of all lines in a 3D space is 4D

P(θ, φ, px, py, pz)

[Levoy and Hanrahan 1996; Gortler et al. 1996]

Position-Angle Parameterization

[Figure: each ray is parameterized by a 2D position s on a reference plane and a 2D direction θ]

Two-Plane Parameterization

[Figure: each ray is parameterized by its 2D intersections s and u with two parallel reference planes]

Alternative Parameterizations

Left: Points on a plane or curved surface and directions leaving each point

Center: Pairs of points on the surface of a sphere

Right: Pair of points on two different (typically parallel) planes
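The two-plane parameterization maps directly to 3D rays: a sample L(s, t, u, v) is the ray through point (s, t) on the first plane and (u, v) on the second. A minimal NumPy sketch, assuming parallel planes at z = 0 and z = separation (the function name is illustrative, not from the course):

```python
import numpy as np

def ray_from_two_plane(s, t, u, v, separation=1.0):
    """Map a two-plane light field sample L(s, t, u, v) to a 3D ray.

    (s, t) lies on the first reference plane at z = 0 and (u, v) on a
    parallel plane at z = separation; the ray passes through both points.
    Returns (origin, unit_direction).
    """
    origin = np.array([s, t, 0.0])
    through = np.array([u, v, separation])
    direction = through - origin
    return origin, direction / np.linalg.norm(direction)
```

Rays parallel to the plane normal have s = u and t = v; off-axis rays diverge, which is why two planes suffice to index every line crossing both.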

Image-Based Rendering

[Levoy and Hanrahan 1996] [Gortler et al. 1996]

[Carranza et al. 2003]

Digital Image Refocusing

[Ng 2005]

Kodak 16-megapixel sensor

125 μm square-sided microlenses
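Digital refocusing can be sketched as shift-and-add over the sub-aperture views: each view is translated in proportion to its angular offset, then the views are averaged. A minimal NumPy sketch, assuming the light field is already decoded into a (U, V, S, T) array and using integer pixel shifts (Ng's method uses sub-pixel interpolation and a Fourier-slice variant; the function name and alpha convention here are illustrative):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, S, T) -- sub-aperture images
                 indexed by angular coordinates (u, v) and pixels (s, t).
    alpha:       relative depth of the synthetic focal plane; 1.0 leaves
                 the focus at the original plane.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset from
            # the central view, then accumulate.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += np.roll(light_field[u, v],
                           (int(round(du)), int(round(dv))), axis=(0, 1))
    return out / (U * V)
```

Points on the chosen focal plane align across all views and stay sharp; points off that plane are averaged over misaligned copies, producing synthetic defocus blur.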

3D Displays

Parallax Panoramagram [Kanolt 1918]

3DTV with Integral Imaging [Okano et al. 1999]

MERL 3DTV [Matusik and Pfister 2004]

Multiple Sensors

Static Camera Arrays

Stanford Multi-Camera Array: 125 cameras using custom hardware

[Wilburn et al. 2002; Wilburn et al. 2005]

Distributed Light Field Camera: 64 cameras with distributed rendering [Yang et al. 2002]

Temporal Multiplexing

Controlled Camera or Object Motion

Stanford Spherical Gantry [Levoy and Hanrahan 1996]

Relighting with 4D Incident Light Fields [Masselus et al. 2003]

Uncontrolled Camera or Object Motion

Unstructured Lumigraph Rendering [Gortler et al. 1996; Buehler et al. 2001]

Spatial Multiplexing

Parallax Barriers (Pinhole Arrays)

[Ives 1903]

[Figure: parallax barrier (pinhole array) placed in front of the sensor]

Spatially-multiplexed light field capture using masks (i.e., barriers):
• Causes severe attenuation, requiring long exposures or yielding lower SNR
• Imposes a fixed trade-off between spatial and angular resolution (unless implemented with programmable masks, e.g., LCDs)
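The fixed spatial-angular trade-off is simple arithmetic: every spatial sample consumes a block of sensor pixels, one per angular sample. A minimal sketch, assuming an idealized axis-aligned barrier with no crosstalk (names and the example figures are illustrative):

```python
def spatio_angular_tradeoff(sensor_res, angular_res):
    """Spatial resolution remaining after a mask (or lenslet array)
    multiplexes angular samples onto the sensor.

    sensor_res:  (height, width) of the sensor in pixels.
    angular_res: angular samples per dimension; each spatial sample
                 then consumes angular_res x angular_res pixels.
    Idealized: axis-aligned multiplexing, no crosstalk or diffraction.
    """
    h, w = sensor_res
    return h // angular_res, w // angular_res

# e.g., a hypothetical 4000 x 4000 sensor behind a barrier giving
# 10 x 10 angular samples leaves only 400 x 400 spatial samples.
```

This is why higher angular resolution (smoother refocusing and parallax) directly costs spatial resolution unless the mask is reprogrammable or the sensor grows.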

Light Field Photograph (Sensor)

Light Field Photograph (Decoded)

[The (New) Stanford Light Field Archive]

[Figure: decoded sub-aperture views, labeled “looking up,” “looking to the right,” and a sample image]

Integral Imaging (“Fly’s Eye” Lenslets)

[Lippmann 1908]

[Figure: lenslet array placed at its focal distance f in front of the sensor]

Spatially-multiplexed light field capture using lenslets:
• Imposes a fixed trade-off between spatial and angular resolution
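Decoding a lenslet photograph into sub-aperture views is a pixel rearrangement: the pixel at offset (i, j) under every lenslet sees the same viewing direction, so gathering those pixels across all lenslets yields one view. A minimal NumPy sketch, assuming ideal axis-aligned lenslets exactly n_angular pixels wide (the function name is illustrative; real decoders must also calibrate lenslet centers and rotation):

```python
import numpy as np

def decode_lenslet_image(sensor, n_angular):
    """Rearrange a lenslet sensor image into sub-aperture views.

    sensor:    2D array whose pixels form n_angular x n_angular blocks,
               one block per lenslet (ideal, axis-aligned lenslets).
    n_angular: angular samples per dimension under each lenslet.
    Returns an array of shape (n_angular, n_angular, S, T), where
    (S, T) is the number of lenslets (the spatial resolution).
    """
    H, W = sensor.shape
    S, T = H // n_angular, W // n_angular
    blocks = sensor[:S * n_angular, :T * n_angular].reshape(
        S, n_angular, T, n_angular)
    # Gathering the (i, j)-th pixel of every lenslet yields the
    # sub-aperture view for direction (i, j).
    return blocks.transpose(1, 3, 0, 2)
```

The output array is exactly the (U, V, S, T) form that shift-and-add refocusing and view interpolation consume.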

Modern, Digital Implementations

Digital Light Field Photography:
• Hand-held plenoptic camera [Ng et al. 2005]
• Heterodyne light field camera [Veeraraghavan et al. 2007]