Sparse Signal Recovery: Theory, Applications and Algorithms

Bhaskar Rao
Department of Electrical and Computer Engineering, University of California, San Diego
Slides: ewh.ieee.org/conf/spawc/2009/plenary_2.pdf

Collaborators: I. Gorodnitsky, S. Cotter, D. Wipf, J. Palmer, K. Kreutz-Delgado

Acknowledgement: Travel support provided by Office of Naval Research Global under Grant Number N62909-09-1-1024. The research was supported by several grants from the National Science Foundation, most recently CCF-0830612.




Outline

1 Talk Objective

2 Sparse Signal Recovery Problem

3 Applications

4 Computational Algorithms

5 Performance Evaluation

6 Conclusion



Importance of Problem

Organized, with Prof. Bresler, a special session at the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

From the ICASSP 1998 proceedings (Volume III), the session SPEC-DSP: Signal Processing with Sparseness Constraint:

Signal Processing with the Sparseness Constraint. B. Rao (University of California, San Diego, USA)

Application of Basis Pursuit in Spectrum Estimation. S. Chen (IBM, USA); D. Donoho (Stanford University, USA)

Parsimony and Wavelet Method for Denoising. H. Krim (MIT, USA); J. Pesquet (University Paris Sud, France); I. Schick (GTE Internetworking and Harvard Univ., USA)

Parsimonious Side Propagation. P. Bradley, O. Mangasarian (University of Wisconsin-Madison, USA)

Fast Optimal and Suboptimal Algorithms for Sparse Solutions to Linear Inverse Problems. G. Harikumar (Tellabs Research, USA); C. Couvreur, Y. Bresler (University of Illinois, Urbana-Champaign, USA)

Measures and Algorithms for Best Basis Selection. K. Kreutz-Delgado, B. Rao (University of California, San Diego, USA)

Sparse Inverse Solution Methods for Signal and Image Processing Applications. B. Jeffs (Brigham Young University, USA)

Image Denoising Using Multiple Compaction Domains. P. Ishwar, K. Ratakonda, P. Moulin, N. Ahuja (University of Illinois, Urbana-Champaign, USA)


Talk Goals

Sparse signal recovery is an interesting area with many potential applications.

The tools developed for solving the sparse signal recovery problem are useful for signal processing practitioners to know.



Problem Description

Model: t = Φw + ε, where

t is the N × 1 measurement vector.

Φ is the N × M dictionary matrix, with M ≫ N.

w is the M × 1 desired vector, which is sparse with K nonzero entries.

ε is the additive noise, modeled as white Gaussian.


Problem Statement

Noise-free case: Given a target signal t and a dictionary Φ, find the weights w that solve:

min_w Σ_{i=1}^{M} I(w_i ≠ 0) subject to t = Φw,

where I(·) is the indicator function.

Noisy case: Given a target signal t and a dictionary Φ, find the weights w that solve:

min_w Σ_{i=1}^{M} I(w_i ≠ 0) subject to ‖t − Φw‖₂² ≤ β.


Complexity

Direct approach: search over all possible subsets, which means a search over a total of C(M, K) (M choose K) subsets. The problem is NP-hard. With M = 30, N = 20, and K = 10 there are about 3 × 10⁷ subsets (very complex); see the sketch below.

A branch-and-bound algorithm can be used to find the optimal solution. The space of subsets searched is pruned, but the search may still be very complex.

The indicator function is not continuous and so is not amenable to standard optimization tools.

Challenge: find low-complexity methods with acceptable performance.

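A minimal Python sketch of that exhaustive subset search (an editorial illustration, not from the talk; the function name and tolerance are assumptions):

```python
import itertools
import numpy as np

def l0_brute_force(Phi, t, K, tol=1e-8):
    """Exhaustive search over all K-column subsets of Phi (NP-hard in general)."""
    N, M = Phi.shape
    best_w, best_res = None, np.inf
    for support in itertools.combinations(range(M), K):
        cols = Phi[:, support]
        coef, *_ = np.linalg.lstsq(cols, t, rcond=None)  # LS fit on this support
        res = np.linalg.norm(t - cols @ coef)
        if res < best_res:
            best_w, best_res = np.zeros(M), res
            best_w[list(support)] = coef
        if best_res <= tol:          # noise-free early exit: t = Phi w found
            break
    return best_w, best_res

# With M = 30 and K = 10 the loop visits C(30, 10) = 30,045,015 subsets,
# the ~3e7 figure quoted above.
```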


Applications

Signal Representation (Mallat, Coifman, Wickerhauser, Donoho, ...)

EEG/MEG (Leahy, Gorodnitsky, Ioannides, ...)

Functional Approximation and Neural Networks (Chen, Natarajan, Cun, Hassibi, ...)

Bandlimited Extrapolation and Spectral Estimation (Papoulis, Lee, Cabrera, Parks, ...)

Speech Coding (Ozawa, Ono, Kroon, Atal, ...)

Sparse Channel Equalization (Fevrier, Greenstein, Proakis, ...)

Compressive Sampling (Donoho, Candes, Tao, ...)


DFT Example

N is chosen to be 64 in this example.

Measurement t:

t[n] = cos(ω₀ n), n = 0, 1, 2, ..., N − 1, with ω₀ = (2π/64) · (33/2)

Dictionary elements:

φ_k = [1, e^{−jω_k}, e^{−j2ω_k}, ..., e^{−j(N−1)ω_k}]^T, ω_k = 2πk/M

Consider M = 64, 128, 256 and 512.
NOTE: The frequency components of t are included in the dictionary Φ for M = 128, 256, and 512.

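The setup is easy to reproduce; a short numpy sketch (illustrative, not from the slides), where correlating t against the dictionary columns coincides with a length-M zero-padded FFT:

```python
import numpy as np

N = 64
n = np.arange(N)
w0 = (2 * np.pi / 64) * (33 / 2)     # off the M = 64 grid, on the M >= 128 grids
t = np.cos(w0 * n)

def dft_dictionary(N, M):
    """phi_k = [1, e^{-j w_k}, ..., e^{-j(N-1) w_k}]^T with w_k = 2*pi*k/M."""
    return np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(M)) / M)

for M in (64, 128, 256, 512):
    Phi = dft_dictionary(N, M)
    spectrum = np.abs(Phi.T @ t)     # equals np.abs(np.fft.fft(t, M))
```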


FFT Results with Different M

[Figure: FFT magnitude spectra for M = 64, 128, 256, and 512]


Magnetoencephalography (MEG)

Given measured magnetic fields outside of the head, the goal is to locate the responsible current sources inside the head.

[Figure: head model; legend: MEG measurement points and candidate source locations]


MEG Example

At any given time, typically only a few sources are active (SPARSE).

[Figure: same geometry; legend: MEG measurement points, candidate source locations, and active source locations]


MEG Formulation

Forming the overcomplete dictionary Φ:

The number of rows equals the number of sensors.

The number of columns equals the number of possible source locations.

Φ_ij = the magnetic field measured at sensor i produced by a unit current at location j.

We can compute Φ using a boundary element brain model and Maxwell's equations.

Many different combinations of current sources can produce the same observed magnetic field t.

By finding the sparsest signal representation/basis, we find the smallest number of sources capable of producing the observed field.

Such a representation is of neurophysiological significance.


Compressive Sampling

[Figure: transform coding vs. compressed sensing. In transform coding the signal b is expanded in a basis Ψ as b = Ψw with w sparse, and the large coefficients are kept. In compressed sensing the measurements are t = Ab = AΨw = Φw, so the effective dictionary is Φ = AΨ.]

Computation: t → w → b


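A small numpy/scipy sketch of this pipeline (illustrative: the DCT synthesis basis, the sizes, and the seed are assumptions, not from the talk), showing how the effective dictionary Φ = AΨ is formed:

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)

M, N, K = 256, 64, 5                          # signal length, measurements, sparsity
Psi = idct(np.eye(M), norm="ortho", axis=0)   # synthesis basis: b = Psi @ w
A = rng.standard_normal((N, M)) / np.sqrt(N)  # random measurement matrix

w = np.zeros(M)                               # K-sparse coefficient vector
w[rng.choice(M, K, replace=False)] = rng.standard_normal(K)
b = Psi @ w                                   # the underlying signal
t = A @ b                                     # measurements: t = A Psi w = Phi w
Phi = A @ Psi                                 # effective dictionary for the solver
# Recovery then follows the t -> w -> b computation with any sparse solver below.
```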


Potential Approaches

The problem is NP-hard, so alternate strategies are needed:

Greedy search techniques: Matching Pursuit, Orthogonal Matching Pursuit.

Minimizing diversity measures: the indicator function is not continuous, so define surrogate cost functions that are more tractable and whose minimization leads to sparse solutions, e.g. ℓ1 minimization.

Bayesian methods: make appropriate statistical assumptions on the solution and apply estimation techniques to identify the desired sparse solution.


Greedy Search Methods: Matching Pursuit

Select the column of Φ that is most aligned with the current residual.

[Figure: t = Φw + ε with the selected dictionary column highlighted]

Remove its contribution from the residual.

Stop when the residual becomes small enough or when a sparsity threshold is exceeded.

Some variations:

Matching Pursuit [Mallat & Zhang]

Orthogonal Matching Pursuit [Pati et al.]

Order Recursive Matching Pursuit (ORMP)

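A compact numpy sketch of Orthogonal Matching Pursuit following the three steps above (illustrative; the stopping tolerance is an assumption):

```python
import numpy as np

def omp(Phi, t, K, tol=1e-6):
    """Orthogonal Matching Pursuit: greedy sketch of the steps above."""
    N, M = Phi.shape
    residual = t.astype(float).copy()
    support, w = [], np.zeros(M)
    for _ in range(K):
        # 1. pick the column most aligned with the current residual
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0            # do not reselect a column
        support.append(int(np.argmax(correlations)))
        # 2. refit on the whole support and remove its contribution
        coef, *_ = np.linalg.lstsq(Phi[:, support], t, rcond=None)
        residual = t - Phi[:, support] @ coef
        # 3. stop when the residual is small enough
        if np.linalg.norm(residual) < tol:
            break
    w[support] = coef
    return w
```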


Inverse techniques

For the systems of equations Φw = t, the solution set ischaracterized by {ws : ws = Φ+t + v , v ∈ N (Φ)}, where N (Φ)denotes the null space of Φ and Φ+ = ΦT (ΦΦT )−1.

Minimum Norm solution : The minimum `2 norm solutionwmn = Φ+t is a popular solution

Noisy Case: regularized `2 norm solution often employed and is givenby

wreg = ΦT (ΦΦT + λI )−1t

Problem: Solution is not Sparse

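Both estimates are one-line computations; a minimal numpy sketch (illustrative):

```python
import numpy as np

def minimum_norm(Phi, t):
    """w_mn = Phi^+ t = Phi^T (Phi Phi^T)^{-1} t (noise-free case)."""
    return Phi.T @ np.linalg.solve(Phi @ Phi.T, t)

def regularized_min_norm(Phi, t, lam):
    """w_reg = Phi^T (Phi Phi^T + lam*I)^{-1} t (noisy case)."""
    N = Phi.shape[0]
    return Phi.T @ np.linalg.solve(Phi @ Phi.T + lam * np.eye(N), t)

# Both estimates spread energy over all M entries: generically no coefficient
# is exactly zero, which is the "not sparse" problem noted above.
```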


Diversity Measures

Functionals whose minimization leads to sparse solutions.

Many examples are found in the fields of economics, social science, and information theory.

These functionals are concave, which leads to difficult optimization problems.


Examples of Diversity Measures

ℓ(p≤1) diversity measure:

E^(p)(w) = Σ_{l=1}^{M} |w_l|^p, p ≤ 1

Gaussian entropy:

H_G(w) = Σ_{l=1}^{M} ln |w_l|²

Shannon entropy:

H_S(w) = −Σ_{l=1}^{M} w̃_l ln w̃_l, where w̃_l = w_l² / ‖w‖²

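These measures are straightforward to evaluate; a small numpy sketch (illustrative; the epsilon guards for the logarithms are assumptions):

```python
import numpy as np

def lp_diversity(w, p):
    """E^(p)(w) = sum |w_l|^p, p <= 1."""
    return np.sum(np.abs(w) ** p)

def gaussian_entropy(w, eps=1e-12):
    """H_G(w) = sum ln |w_l|^2 (eps guards the log at exact zeros)."""
    return np.sum(np.log(np.abs(w) ** 2 + eps))

def shannon_entropy(w, eps=1e-12):
    """H_S(w) = -sum w~_l ln w~_l with w~_l = w_l^2 / ||w||^2."""
    p = w ** 2 / np.sum(w ** 2)
    return -np.sum(p * np.log(p + eps))

# Sparser vectors score lower, e.g. for p = 0.5:
# lp_diversity([1, 0, 0]) = 1.0 < lp_diversity([0.6, 0.6, 0.6]) ~= 2.32
```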


Diversity Minimization

Noiseless case:

min_w E^(p)(w) = Σ_{l=1}^{M} |w_l|^p subject to Φw = t

Noisy case:

min_w ( ‖t − Φw‖² + λ Σ_{l=1}^{M} |w_l|^p )

p = 1 is a very popular choice because of the convex nature of the optimization problem (Basis Pursuit and Lasso).

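For the p = 1 noisy case, iterative soft thresholding (ISTA) is one standard solver for this objective; a minimal, untuned numpy sketch (an editorial illustration, not an algorithm from the talk):

```python
import numpy as np

def ista(Phi, t, lam, n_iter=500):
    """Iterative soft thresholding for min_w ||t - Phi w||^2 + lam * sum |w_l|."""
    w = np.zeros(Phi.shape[1])
    step = 1.0 / (2 * np.linalg.norm(Phi, 2) ** 2)  # 1/L, L = Lipschitz constant
    for _ in range(n_iter):
        grad = 2 * Phi.T @ (Phi @ w - t)            # gradient of the quadratic term
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return w
```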


Bayesian Methods

Maximum A Posteriori approach (MAP):

Assume a sparsity-inducing prior on the latent variable w.

Develop an appropriate MAP estimation algorithm.

Empirical Bayes:

Assume a parameterized prior on the latent variable w (hyperparameters).

Marginalize over the latent variable w and estimate the hyperparameters.

Determine the posterior distribution of w and obtain a point estimate as the mean, mode, or median of this density.


Generalized Gaussian Distribution

Density function (subgaussian: p > 2; supergaussian: p < 2):

f(x) = p / (2σ Γ(1/p)) · exp{ −(|x|/σ)^p }

[Figure: generalized Gaussian densities for p = 0.5, 1, 2, 10]


Student t Distribution

Density function:

f(x) = Γ((ν+1)/2) / (√(πν) Γ(ν/2)) · (1 + x²/ν)^{−(ν+1)/2}

[Figure: Student-t densities for ν = 1, 2, 5, with the Gaussian for comparison]

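Used as priors on the weights, the negative log densities of these two families act as sparsity-promoting penalties; a numpy/scipy sketch of their log densities (illustrative):

```python
import numpy as np
from scipy.special import gammaln

def gg_logpdf(x, p, sigma=1.0):
    """log of the generalized Gaussian density above."""
    return (np.log(p) - np.log(2 * sigma) - gammaln(1.0 / p)
            - (np.abs(x) / sigma) ** p)

def student_t_logpdf(x, nu):
    """log of the Student-t density with nu degrees of freedom."""
    return (gammaln((nu + 1) / 2) - 0.5 * np.log(np.pi * nu) - gammaln(nu / 2)
            - (nu + 1) / 2 * np.log1p(x ** 2 / nu))

# As a prior on w_l, -gg_logpdf gives the per-coefficient penalty
# (|w_l|/sigma)^p plus a constant, recovering the l_p diversity measure.
```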


MAP Using a Supergaussian Prior

Assuming a Gaussian likelihood model for f(t|w), we can find MAP weight estimates:

w_MAP = arg max_w log f(w|t)
      = arg max_w ( log f(t|w) + log f(w) )
      = arg min_w ( ‖Φw − t‖² + λ Σ_{l=1}^{M} |w_l|^p )

This is essentially a regularized LS framework. The interesting range for p is p ≤ 1.


MAP Estimate: FOCal Underdetermined System Solver (FOCUSS)

The approach involves solving a sequence of regularized weighted minimum-norm problems:

q_{k+1} = arg min_q ( ‖Φ_{k+1} q − t‖² + λ‖q‖² ),

where Φ_{k+1} = Φ M_{k+1} and M_{k+1} = diag(|w_{k,l}|^{1−p/2}).

w_{k+1} = M_{k+1} q_{k+1}

p = 0 gives ℓ0 minimization and p = 1 gives ℓ1 minimization.

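A direct numpy transcription of the iteration above, using the closed-form solution of each regularized weighted minimum-norm problem (illustrative; λ, p, the iteration count, and the minimum-norm initialization default are assumptions):

```python
import numpy as np

def focuss(Phi, t, p=0.8, lam=1e-4, n_iter=50, w_init=None):
    """Regularized FOCUSS sketch: repeated reweighted minimum-norm solves."""
    N, M = Phi.shape
    # minimum-norm initialization (a suitable default, per the summary slide)
    w = Phi.T @ np.linalg.solve(Phi @ Phi.T, t) if w_init is None else w_init
    for _ in range(n_iter):
        m = np.abs(w) ** (1 - p / 2)     # M_{k+1} = diag(|w_{k,l}|^{1-p/2})
        Phi_w = Phi * m                  # Phi_{k+1} = Phi M_{k+1} (column scaling)
        q = Phi_w.T @ np.linalg.solve(Phi_w @ Phi_w.T + lam * np.eye(N), t)
        w = m * q                        # w_{k+1} = M_{k+1} q_{k+1}
    return w
```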


FOCUSS Summary

For p < 1, the solution is initial-condition dependent:

Prior knowledge can be incorporated.

The minimum-norm solution is a suitable choice.

One can retry with several initial conditions.

Computationally more complex than Matching Pursuit algorithms.

The sparsity-versus-tolerance tradeoff is more involved.

The factor p allows a trade-off between the speed of convergence and the sparsity obtained.


Convergence Errors vs. Structural Errors

[Figure: two cost surfaces −log p(w|t); in the convergence-error case the iteration settles at a local minimum w* ≠ w0, while in the structural-error case the global minimum w* itself differs from w0]

w* = solution we have converged to; w0 = maximally sparse solution.


Shortcomings of these Methods

p = 1:

Basis Pursuit/Lasso often suffer from structural errors. Therefore, regardless of initialization, we may never find the best solution.

p < 1:

The FOCUSS class of algorithms suffers from numerous suboptimal local minima and therefore from convergence errors.

In the low-noise limit, the number of local minima K satisfies K ∈ [ C(M−1, N) + 1, C(M, N) ].

At most local minima, the number of nonzero coefficients equals N, the number of rows of the dictionary.


Empirical Bayesian Method

Main steps:

Parameterize the prior: f(w|γ).

Marginalize:

f(t|γ) = ∫ f(t, w|γ) dw = ∫ f(t|w) f(w|γ) dw

Estimate the hyperparameter γ̂.

Determine the posterior density of the latent variable, f(w|t, γ̂).

Obtain a point estimate of w.

Example: Sparse Bayesian Learning (SBL, by Tipping).

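A compact numpy sketch of these steps for SBL, with a Gaussian prior w ~ N(0, diag(γ)) and EM updates of the hyperparameters (illustrative; the noise variance and iteration count are assumptions):

```python
import numpy as np

def sbl(Phi, t, noise_var=1e-2, n_iter=100):
    """EM sketch of Sparse Bayesian Learning: point estimate = posterior mean."""
    N, M = Phi.shape
    gamma = np.ones(M)
    for _ in range(n_iter):
        S = noise_var * np.eye(N) + (Phi * gamma) @ Phi.T  # marginal cov of t
        SinvPhi = np.linalg.solve(S, Phi)
        mu = gamma * (Phi.T @ np.linalg.solve(S, t))       # posterior mean of w
        Sigma_diag = gamma - gamma**2 * np.einsum('nm,nm->m', Phi, SinvPhi)
        gamma = mu**2 + Sigma_diag                         # EM hyperparameter update
    return mu, gamma   # many gamma_i -> 0, pruning the corresponding columns
```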


DFT Example: Results Using FOCUSS

[Figure: DFT results using FOCUSS]


Empirical Tests

Random overcomplete bases Φ and sparse weight vectors w0 were generated and used to create target signals t, i.e., t = Φw0 + ε.

SBL (Empirical Bayes) was compared with Basis Pursuit and FOCUSS (with various p values) in the task of recovering w0.


Experiment I: Comparison with Noiseless Data

Randomized Φ (20 rows by 40 columns).

Diversity of the true w0 is 7.

Results are from 1000 independent trials.

NOTE: An error occurs whenever an algorithm converges to a solution w not equal to w0.

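A self-contained sketch of one such noiseless trial ensemble (illustrative: Basis Pursuit is solved as a linear program via scipy, the trial count is reduced, and the recovery test is a simple distance threshold):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def basis_pursuit(Phi, t):
    """min ||w||_1 s.t. Phi w = t, as an LP in variables [w; u] with |w| <= u."""
    N, M = Phi.shape
    c = np.concatenate([np.zeros(M), np.ones(M)])
    A_ub = np.block([[np.eye(M), -np.eye(M)],       # w - u <= 0
                     [-np.eye(M), -np.eye(M)]])     # -w - u <= 0
    A_eq = np.hstack([Phi, np.zeros((N, M))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * M), A_eq=A_eq, b_eq=t,
                  bounds=[(None, None)] * M + [(0, None)] * M)
    return res.x[:M]

errors, trials = 0, 100                        # the talk used 1000 trials
for _ in range(trials):
    Phi = rng.standard_normal((20, 40))        # randomized 20 x 40 dictionary
    w0 = np.zeros(40)
    w0[rng.choice(40, 7, replace=False)] = rng.standard_normal(7)  # diversity 7
    t = Phi @ w0                               # noiseless targets
    if np.linalg.norm(basis_pursuit(Phi, t) - w0) > 1e-4:  # failed to recover w0
        errors += 1
print(f"error rate: {errors / trials:.2f}")
```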


Experiment II: Comparison with Noisy Data

Randomized Φ (20 rows by 40 columns).

Diversity of the true w0 is 7.

20 dB AWGN.

Results are from 1000 independent trials.

NOTE: We no longer distinguish convergence errors from structural errors.


MEG Example

Data based on the CTF MEG system at UCSF with 275 scalp sensors.

Forward model based on 40,000 points (vertices of a triangular grid) and 3 different scale factors.

Dimensions of the lead field matrix (dictionary): 275 by 120,000.

Overcompleteness ratio: approximately 436.

Up to 40 unknown dipole components were randomly placed throughout the sample space.

SBL was able to resolve roughly twice as many dipoles as the next best method (Ramirez 2005).


Summary

Discussed the role of sparseness in linear inverse problems.

Discussed applications of sparsity.

Discussed methods for computing sparse solutions:

Matching Pursuit algorithms

MAP methods (FOCUSS algorithm and ℓ1 minimization)

Empirical Bayes (Sparse Bayesian Learning, SBL)

The expectation is that there will be continued growth in the application domain as well as in algorithm development.
