
Page 1

Lensless Imaging

Richard Baraniuk, Rice University

Ashok Veeraraghavan, Rice University

Aswin Sankaranarayanan, CMU

John Rogers, UIUC

Page 2

Re-Imagining Imaging

• Conventional camera design
– based on the human visual system model
– objective lens: directs a cone of light into the camera
– 1-to-1 correspondence between scene points and camera pixels makes imaging easy

Page 3

Re-Imagining Imaging

• Conventional camera design
– based on the human visual system model
– objective lens: directs a cone of light into the camera
– 1-to-1 correspondence between scene points and camera pixels makes imaging easy

• Goal: a large, potentially flexible imaging platform capable of distributed acquisition of light fields
– inspired by distributed light sensing in cephalopod skin

• Approach: lensless imaging
– leverage recent progress in coded aperture and compressive sensing
– exciting opportunities for flat and flexible cameras

Page 4

Problem

• With incoherent light, no phase information
– all photo-detectors measure roughly the same information, the average light level of the scene

[Figure: bare sensor facing the scene, with photo-detector 1 and photo-detector 2 marked]

Page 5

Solution

• Add a mask in front of the sensor/photo-detector
– attenuates certain rays of light

• How does this help?
– the same scene point is attenuated differently at different photo-detectors
– each photo-detector sees a different linear combination of the scene points
– can design mask(s) such that we can recover a high-resolution version of the scene (compressive sensing)

[Figure: mask placed between the scene and the sensor, with photo-detector 1 and photo-detector 2 marked]
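The measurement model implied by this picture can be written as y = Φx, where each row of Φ records which scene points a given photo-detector sees through the mask. The NumPy sketch below simulates that model; the scene size, number of measurements, and 10% mask transparency are illustrative assumptions, not the prototype's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the prototype's actual dimensions):
n = 32 * 32        # number of scene points (a vectorized 32x32 scene)
m = 256            # number of photo-detectors

# Each photo-detector sees the scene through a different set of open mask
# elements, i.e. a different 0/1 weighting of the scene points. Stacking
# those weightings row-wise gives the measurement matrix Phi.
Phi = (rng.random((m, n)) < 0.10).astype(float)   # ~10% of elements transparent

# A toy scene and its (noisy) coded measurements: y = Phi @ x + noise
x = rng.random(n)
y = Phi @ x + 0.01 * rng.standard_normal(m)

print(y.shape)     # (256,): one coded measurement per photo-detector
```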

Page 6

Mask Design

• Random mask
– rich theory and algorithms available from compressive sensing
– provable recovery bounds

[Figure: canonical compressive sensing diagram: a short vector of measurements obtained from a sparse signal with few nonzero entries via sub-Nyquist measurement]

Page 7

Mask Design

• Random mask
– rich theory and algorithms available from compressive sensing
– provable recovery bounds
– major impact in a variety of DOD and industrial sensing systems: medical, radar, sonar, hyperspectral, IR, THz imaging, …
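As a concrete illustration of recovery from random-mask measurements, the sketch below uses orthogonal matching pursuit (OMP), one standard compressive sensing algorithm; it is not necessarily the solver used in this project, and the 0/1 matrix, centering step, and problem sizes are assumptions made for the demo.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of Phi to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # refit and update the residual
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 400, 150, 8                            # signal length, measurements, nonzeros
Phi = (rng.random((m, n)) < 0.5).astype(float)   # random 0/1 "mask" matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true                                 # sub-Nyquist measurements (m < n)

# 0/1 matrices share a large common (DC) component across columns; subtracting
# the column means (an exactly equivalent linear system) improves conditioning.
Phi_c = Phi - Phi.mean(axis=0, keepdims=True)
y_c = y - y.mean()

x_hat = omp(Phi_c, y_c, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```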

Page 8

Simulations

[Figure: simulated planar geometry with scene X, mask(s), and sensor; labeled dimensions of 10 units, 10 units, and 1/10 unit; the image X is recovered as a function of the sensor measurements and the mask]

• Recovered image, noiseless system: PSNR = 19 dB
• Recovered image, noisy system: PSNR = 15 dB
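PSNR figures like the ones quoted above are computed from the mean squared error between the reference scene and the reconstruction. The helper below is a generic definition for images scaled to [0, 1], not the authors' simulation code; the toy check is illustrative.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio (dB) for images with values in [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: an estimate whose per-pixel error has standard deviation ~0.1
rng = np.random.default_rng(2)
img = rng.random((64, 64))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)
print(f"PSNR: {psnr(img, noisy):.1f} dB")      # around 20 dB for this error level
```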

Page 9

Planar Prototype

[Figure: cross-section of the sensor-mask assembly: sensor, mount, mask, 0.5 mm gap, and walls to limit the FOV; photos of the sensor-mask assembly, the target on a monitor, and the raw sensor image]

• Sensor: Point Grey Flea3 camera, 1024x1280 pixels (5.3µm each)

• Mask: random binary mask (1), 135x135 features (85µm each), 10% of mask "pixels" transparent

• Target projected on a screen 15cm from the camera (target height/width 24cm)

Page 10

Planar Prototype Results 1

[Figure: target on screen and recovered image]

• Sensor: Point Grey Flea3 camera, 1024x1280 pixels (5.3µm each)

• Mask: random binary mask (1), 135x135 features (85µm each), 10% of mask "pixels" transparent

• Target projected on a screen 15cm from the camera (target height/width 24cm)

• Reconstruction: least squares

• Ongoing: high-resolution reconstruction leveraging sparsity and priors on scene structure
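A minimal sketch of least-squares reconstruction from coded measurements, as listed above. The small Tikhonov (ridge) term, the problem sizes, and the synthetic scene are illustrative assumptions; the prototype's actual solver and calibration are not reproduced here.

```python
import numpy as np

def reconstruct_ls(Phi, y, reg=1e-2):
    """Regularized least squares: argmin_x ||y - Phi x||^2 + reg * ||x||^2."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(n), Phi.T @ y)

# Toy demo with an overdetermined system (more measurements than scene pixels)
rng = np.random.default_rng(3)
n_pix, n_meas = 32 * 32, 2048
Phi = (rng.random((n_meas, n_pix)) < 0.10).astype(float)   # 10%-transparent mask rows
x = rng.random(n_pix)                                      # vectorized 32x32 scene
y = Phi @ x + 0.01 * rng.standard_normal(n_meas)           # noisy coded measurements

x_hat = reconstruct_ls(Phi, y)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```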

Page 11

Planar Prototype Results 2

[Figure: target on screen and recovered image]

• Sensor: Point Grey Flea3 camera, 1024x1280 pixels (5.3µm each)

• Mask: random binary mask (1), 270x270 features (42µm each), 10% of mask "pixels" transparent

• Target projected on a screen 15cm from the camera (target height/width 24cm)

• Reconstruction: least squares

• Ongoing: high-resolution reconstruction leveraging sparsity and priors on scene structure

Page 12

Color Images
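The slides do not spell out how color is handled; one simple possibility, shown here purely as an assumption and not necessarily the authors' approach, is to reconstruct the R, G, and B channels independently, since each channel obeys the same coded measurement model.

```python
import numpy as np

def reconstruct_ls(Phi, y, reg=1e-2):
    """Regularized least-squares reconstruction (same solver as the planar demo)."""
    n = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(n), Phi.T @ y)

# Assumed per-channel pipeline: each color channel is measured and recovered
# independently through the same mask. Sizes are illustrative.
rng = np.random.default_rng(4)
n_pix, n_meas = 16 * 16, 1024
Phi = (rng.random((n_meas, n_pix)) < 0.10).astype(float)
scene_rgb = rng.random((n_pix, 3))                  # vectorized 16x16 RGB scene
meas_rgb = Phi @ scene_rgb                          # one measurement set per channel
recon_rgb = np.stack([reconstruct_ls(Phi, meas_rgb[:, c]) for c in range(3)], axis=1)

print("max abs error:", np.abs(recon_rgb - scene_rgb).max())
```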

Page 13

From Challenge …

• With a planar sensor, must limit the field of view

• Outside field of view, image recovery becomes increasingly ill-posed

[Figure: planar mask(s) and sensor viewing scene X]

Page 14

… To Opportunity

• Without the need for a lens, we can make the mask and sensor curved

• Example: (Hemi)spherical camera
– 180/360-degree field of view
– no field-of-view ill-posedness!

with John Rogers

[Figure: curved sensor array with mask(s)]

Page 15

… To Opportunity

• Without the need for a lens, we can make the mask and sensor curved

• Example: (Hemi)spherical camera
– 180/360-degree field of view
– no field-of-view ill-posedness!

[Figure: spherical sensor array; scene projected on the sphere; un-warped recovered image]
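As an illustration of the "un-warping" step named in the figure, the sketch below resamples intensity values defined on hemispherical sensor directions onto a regular azimuth/elevation grid. The random sampling pattern, grid size, and nearest-neighbor interpolation are assumptions for the demo, not the project's actual mapping.

```python
import numpy as np

rng = np.random.default_rng(5)

# Recovered intensities at (random) directions on the hemisphere z >= 0
n_sensors = 2000
v = rng.standard_normal((n_sensors, 3))
v[:, 2] = np.abs(v[:, 2])
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit direction per sensor element
vals = rng.random(n_sensors)                    # stand-in for recovered intensities

# Target equirectangular grid: azimuth in [-pi, pi), elevation in [0, pi/2]
H, W = 48, 96
az = np.linspace(-np.pi, np.pi, W, endpoint=False)
el = np.linspace(0.0, np.pi / 2, H)
AZ, EL = np.meshgrid(az, el)
dirs = np.stack([np.cos(EL) * np.cos(AZ),
                 np.cos(EL) * np.sin(AZ),
                 np.sin(EL)], axis=-1)          # unit view direction per output pixel

# Nearest-neighbor un-warp: assign each output pixel the value of the sensor
# direction it is most aligned with (largest dot product).
idx = np.argmax(dirs.reshape(-1, 3) @ v.T, axis=1)
unwarped = vals[idx].reshape(H, W)
print(unwarped.shape)                           # (48, 96) azimuth-elevation image
```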

Page 16

Simulations

[Figure: grid of simulated spherical-camera reconstructions; columns: noiseless scene and additive noise levels 𝜎 = 0.1, 𝜎 = 0.5, 𝜎 = 1; rows: number of pixels in the reconstruction (512, 256, 128)]

Page 17

Spherical Prototype

• Sensor: white paint on a spherical shell acts as a diffuser

• Mask: random binary mask, 68x68 features (170µm each), 10% of mask "pixels" transparent

• Target projected on a screen 18cm away (target height/width 24cm)

[Figure: plastic shell with a diffuser inner surface (proxy for a spherical sensor) and a planar mask in front of the shell (flexible PDMS masks on the shell); the current prototype uses a Point Grey Grasshopper camera to capture the image formed on a 1.2 cm² area (400x400 pixels)]

Page 18

Spherical Prototype Results

[Figure: targets on screen, recovered images, and 32x32 recovered images]

• Sensor: white paint on a spherical shell acts as a diffuser

• Mask: random binary mask, 68x68 features (170µm each), 10% of mask "pixels" transparent

• Target projected on a screen 18cm away (target height/width 24cm)

• Reconstruction: least squares

• Ongoing: high-resolution reconstruction leveraging sparsity and priors on scene structure

• Spherical photo-detector to improve SNR

Page 19

Planned Research

• Radically new kinds of cameras
– flat, flexible, (hemi)spherical cameras
– beyond visible (IR, THz, …)
– numerous potential DOD and industrial applications

• New theory and algorithms
– mask design (light throughput versus invertibility)
– dynamic masks
– new recovery algorithms needed for 0/1 masks

• Can perform exploitation directly on compressive measurements (detection/classification, etc.) without numerical scene reconstruction; a toy sketch follows below

• Sensing light fields instead of images
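As a toy illustration of exploitation directly on compressive measurements, the sketch below classifies coded measurement vectors y = Φx against class templates projected through the same mask, with no scene reconstruction. The synthetic classes, problem sizes, and nearest-template rule are assumptions for the demo, not the planned approach.

```python
import numpy as np

# Because random projections approximately preserve distances, a nearest-class-mean
# classifier applied to y = Phi x can work without ever reconstructing x.
rng = np.random.default_rng(6)
n_pix, n_meas, n_classes = 32 * 32, 128, 5

Phi = (rng.random((n_meas, n_pix)) < 0.10).astype(float)   # random 0/1 mask rows

# Synthetic class templates and noisy scene instances drawn from each class
templates = rng.random((n_classes, n_pix))
labels = rng.integers(0, n_classes, size=200)
scenes = templates[labels] + 0.05 * rng.standard_normal((200, n_pix))

# Compressive measurements of templates and scenes (no reconstruction anywhere)
Y_templates = templates @ Phi.T              # (n_classes, n_meas)
Y_scenes = scenes @ Phi.T                    # (200, n_meas)

# Classify each measurement vector by its nearest template in measurement space
d = np.linalg.norm(Y_scenes[:, None, :] - Y_templates[None, :, :], axis=2)
pred = np.argmin(d, axis=1)
print("accuracy:", np.mean(pred == labels))
```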