Distributed Compressive Sampling

Post on 26-Aug-2020


Rice University · dsp.rice.edu/cs

Distributed Compressive Sampling: A Framework for Integrated Sensing and Processing for Signal Ensembles

Marco Duarte · Shriram Sarvotham · Michael Wakin · Dror Baron · Richard Baraniuk

DSP Sensing

• The typical sensing/compression setup
  – compress = transform, sort coefficients, encode
  – most computation at sensor (asymmetrical)
  – lots of work to throw away >80% of the coefficients

[diagram: sample → compress → transmit → receive → decompress]

• Measure projections onto incoherent basis/frame
  – random "white noise" is universally incoherent
• Reconstruct via nonlinear techniques
• Mild oversampling: M > cK
• Highly asymmetrical (most computation at receiver)

[diagram: project → transmit → receive → reconstruct]

Compressive Sampling (CS)
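To make the measurement step concrete, here is a minimal sketch (my own illustration, not from the slides) of taking M random "white noise" projections of a K-sparse signal; the names `Phi`, `x`, `y` are my own:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 256, 8, 64               # signal length, sparsity, measurements (M << N)

# K-sparse signal: K nonzero entries at random locations
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Random "white noise" measurement matrix: universally incoherent
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Sensing = M incoherent projections; no transform/sort/encode work at the sensor
y = Phi @ x
```

All the sensor computes is one matrix-vector product; the heavy lifting moves to the receiver.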

CS Reconstruction

• Underdetermined: y = Φx with M < N has infinitely many solutions
• Possible approaches:
  – ℓ2 minimization (min ‖x‖₂ s.t. y = Φx): wrong solution (not sparse)
  – ℓ0 minimization (min ‖x‖₀ s.t. y = Φx): right solution, but not tractable
  – ℓ1 minimization (min ‖x‖₁ s.t. y = Φx): right solution and tractable if M > cK (c ~ 3 or 4)
• Also: efficient greedy algorithms for sparse approximation
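The slides point to efficient greedy algorithms for sparse approximation; below is a minimal Orthogonal Matching Pursuit sketch (my own toy implementation, not the authors' software) recovering a K-sparse signal from M > cK random projections:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily build a K-term support."""
    N = Phi.shape[1]
    residual = y.copy()
    support = []
    for _ in range(K):
        # pick the column most correlated with the current residual
        corr = np.abs(Phi.T @ residual)
        corr[support] = -np.inf            # never re-pick a chosen atom
        support.append(int(np.argmax(corr)))
        # least-squares fit of y on the current support
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat

# Recover a K-sparse signal from M random projections (noiseless case)
rng = np.random.default_rng(0)
N, K, M = 128, 5, 64
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = omp(Phi, Phi @ x, K)
```

With this much oversampling the greedy picks land on the true support and the least-squares step then recovers the coefficients essentially exactly.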

Compressive Sampling

• CS changes the rules of the data acquisition game
  – changes what we mean by "sampling"
  – exploits a priori signal/image sparsity information (that the signal is compressible in some representation)
  – related to multiplex sampling (D. Brady - DISP)
• Potential next-generation data acquisition
  – new distributed source coding algorithms for multi-sensor applications
  – new A/D converters (sub-Nyquist) [DARPA A2I]
  – new mixed-signal analog/digital processing systems
  – new imagers, imaging, and image processing algorithms
  – …

• Longer signals via "random" transforms
• Non-Gaussian measurement scheme
• Low complexity measurement (approx O(N) versus O(MN))
  – universally incoherent
• Low complexity reconstruction
  – e.g., Matching Pursuit
  – compute using transforms (approx O(N²) versus O(MN²))

Permuted FFT (PFFT)

[diagram: pseudorandom permutation → fast transform (FFT, DCT, etc.) → truncation (keep M out of N)]
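A sketch of the PFFT measurement pipeline, under my reading of the slide that the stages run permute → fast transform → truncate (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 128, 32

x = rng.standard_normal(N)          # input signal

# 1. pseudorandom permutation of the samples
perm = rng.permutation(N)
# 2. fast transform (FFT here; a DCT would also fit), O(N log N)
coeffs = np.fft.fft(x[perm])
# 3. truncation: keep M out of N coefficients as the measurements
y = coeffs[:M]
```

Because every stage is a permutation, a fast transform, or a row selection, the whole measurement costs O(N log N) rather than the O(MN) of a dense random matrix.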

Reconstruction from PFFT Coefficients

[figure: original (65536 pixels) · wavelet thresholding (6500 coefficients) · CS reconstruction (26000 measurements)]

• 4x oversampling enables good approximation
• Wavelet encoding requires extra location encoding + fancy quantization strategy
• Random projection encoding requires no location encoding + only uniform quantization

Random Filtering [with J. Tropp]

• Hardware/software implementation
• Structure of the measurement matrix:
  – convolution ⇒ Toeplitz/circulant
  – downsampling ⇒ keep certain rows
  – if the filter has few taps, the matrix is sparse
  – potential for fast reconstruction
• Can be generalized to analog input

[diagram: "random" FIR filter → downsample (keep M out of N)]

[figure: reconstruction results for time-sparse signals (N = 128, K = 10) and Fourier-sparse signals (N = 128, K = 10)]
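A minimal sketch of the random-filtering measurement (my own illustration; the ±1 taps and the strided downsampling are assumptions, not the authors' exact design):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, taps, K = 128, 32, 16, 10

# time-sparse test signal (N = 128, K = 10, as in the slide)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# "random" FIR filter with few taps -> a sparse Toeplitz measurement matrix
h = rng.choice([-1.0, 1.0], size=taps)

# convolve (Toeplitz structure), then downsample: keep M out of N samples
filtered = np.convolve(x, h)[:N]
y = filtered[:: N // M]
```

The implicit measurement matrix is a row-subsampled Toeplitz matrix with only `taps` nonzeros per row, which is what makes hardware implementation and fast reconstruction plausible.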

Rice CS Camera

[diagram: random pattern on DMD array → single photon detector → image reconstruction]
(see also Coifman et al.)

Correlation in Signal Ensembles

• Sensor networks: intra-sensor and inter-sensor correlation dictated by physical phenomena
• Can we exploit these to jointly compress?
• Popular approach: collaboration
  – inter-sensor communication overhead
  – complexity at sensors
• Ongoing challenge in the information theory community

Distributed Compressive Sampling (DCS)

[diagram: sensors send compressed data to a destination that reconstructs jointly]

Benefits:
• Distributed Source Coding:
  – exploit intra- and inter-sensor correlations ⇒ fewer measurements necessary
  – zero inter-sensor communication overhead
• Compressive Sampling:
  – universality (random projections)
  – "future-proof"
  – encryption
  – robustness to noise, packet loss
  – scalability
  – low complexity at sensors

Joint sparsity models and algorithms for different physical settings

JSM-1: Common + Innovations Model

• Motivation: sampling signals in a smooth field
• Joint sparsity model: x_j = z + z_j
  – length-N sequences x_1, …, x_J
  – z is the length-N common component
  – the z_j are length-N innovation components
  – z has sparsity K
  – the z_j have sparsity K_j
• Measurements: each sensor takes M_j random projections y_j of x_j
• Intuition: sensors should be able to "share the burden" of measuring z
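A toy generator for a JSM-1 ensemble, assuming the standard reading x_j = z + z_j with a K-sparse common part and K_j-sparse innovations (my own sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 128, 4                      # signal length, number of sensors
K, K_j = 8, 2                      # common and innovation sparsities

def sparse(n, k, rng):
    """Random k-sparse vector of length n."""
    v = np.zeros(n)
    v[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    return v

z = sparse(N, K, rng)              # common component (the smooth-field part)

# each sensor j observes x_j = z + z_j with its own small innovation z_j
ensemble = [z + sparse(N, K_j, rng) for _ in range(J)]
```

Each x_j has at most K + K_j nonzeros, but jointly the ensemble carries only K + J·K_j degrees of sparsity, far fewer than J·(K + K_j) — this is the redundancy that joint reconstruction exploits.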

JSM-2: Common Sparse Supports

• Measure J signals, each K-sparse
• Signals share sparse components but with different coefficients

JSM-3: Non-Sparse Common Model

• Joint sparsity model #3 (JSM-3):
  – generalization of JSM-1, 2: length-N sequences
  – common component is non-sparse ⇒ each signal is incompressible
  – signals may (DCS-2) or may not (DCS-1) share sparse supports
• Intuition: each measurement vector contains clues about the common component

DCS Reconstruction

• Measure each x_j independently with M_j random projections
• Reconstruct jointly at central receiver:
  "What is the sparsest joint representation that could yield all measurements y_j?"
  – linear programming: use concatenation of measurements y_j
  – greedy pursuit: iteratively select elements of support set
  – similar to single-sensor case, but more clues available
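A minimal sketch of the greedy-pursuit option for JSM-2 (a simultaneous OMP that pools the sensors' "clues"); this is my own toy implementation, not the authors' SOMP code:

```python
import numpy as np

def somp(Phis, ys, K):
    """Simultaneous OMP for JSM-2: signals share one sparse support."""
    N = Phis[0].shape[1]
    residuals = [y.copy() for y in ys]
    support = []
    for _ in range(K):
        # pool correlation clues from every sensor's residual
        score = sum(np.abs(P.T @ r) for P, r in zip(Phis, residuals))
        score[support] = -np.inf           # never re-pick a chosen index
        support.append(int(np.argmax(score)))
        # per-sensor least-squares on the shared support
        for j, (P, y) in enumerate(zip(Phis, ys)):
            c, *_ = np.linalg.lstsq(P[:, support], y, rcond=None)
            residuals[j] = y - P[:, support] @ c
    return sorted(support)

# J sensors, one shared K-sparse support, different coefficients (JSM-2)
rng = np.random.default_rng(0)
N, K, J, Mj = 50, 5, 8, 25
true_support = sorted(rng.choice(N, K, replace=False).tolist())
Phis = [rng.standard_normal((Mj, N)) / np.sqrt(Mj) for _ in range(J)]
xs = []
for _ in range(J):
    x = np.zeros(N)
    x[true_support] = rng.standard_normal(K)  # same support, new coefficients
    xs.append(x)
ys = [P @ x for P, x in zip(Phis, xs)]
recovered = somp(Phis, ys, K)
```

Summing the correlation scores across sensors is what lets the required M_j shrink as J grows: a wrong index rarely scores high at every sensor simultaneously.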

Theoretical Results

• M_j ≪ N (standard CS results); further reductions from joint reconstruction
• JSM-1: Slepian-Wolf-like bounds for linear programming
• JSM-2: c → 1 with the greedy algorithm as J increases; can even recover with M_j = 1!
• JSM-3: can measure at M_j = cK_j, essentially neglecting z; use iterative estimation of z and z_j. Would otherwise require M_j = N!

JSM-1: Recovery via Linear Programming

JSM-2 SOMP Results (K = 5, N = 50)

[figure: separate vs. joint reconstruction quality for one signal from the ensemble]

Experiment: Sensor Network Light data with JSM-2

JSM-3 ACIE/SOMP Results (same supports; K = 5, N = 50)

• Impact of z vanishes as J increases
