
Random Matrix Theory for Signal Processing Applications

Romain Couillet¹, Mérouane Debbah²

¹EDF Chair on System Sciences and the Energy Challenge, Supélec, Gif-sur-Yvette, France
²Alcatel-Lucent Chair on Flexible Radio, Supélec, Gif-sur-Yvette, France

{romain.couillet,merouane.debbah}@supelec.fr

ICASSP 2011, Prague, Czech Republic.

R. Couillet (Supelec) Random Matrix Theory for Signal Processing Applications 22/05/2011 1 / 102

Page 2: Random Matrix Theory for Signal Processing Applications · Random Matrix Theory for Signal Processing Applications Romain Couillet1, Merouane Debbah´ 2 1EDF Chair on System Sciences

Outline

1. Tools for Random Matrix Theory
   - Classical Random Matrix Theory
   - Introduction to Large Dimensional Random Matrix Theory
   - The Random Matrix Pioneers
   - The Moment Approach and Free Probability
   - Introduction of the Stieltjes Transform
   - Properties of the Asymptotic Support and Spiked Models
   - Summary of what we know and what is left to be done

2. Random Matrix Theory and Signal Source Sensing
   - Small Dimensional Analysis
   - Large Dimensional Random Matrix Analysis

3. Random Matrix Theory and Multi-Source Power Estimation
   - Optimal detector
   - The moment method
   - The Stieltjes transform method

4. Random Matrix Theory and Failure Detection in Complex Systems
   - Random matrix models of local failures in sensor networks
   - Failure detection and localization



Tools for Random Matrix Theory Classical Random Matrix Theory

Definitions

Random Matrix

A random matrix is a matrix $X \in \mathbb{C}^{N \times n}$ with random entries $X_{ij}$ following a given probability distribution.

In many problems (with symmetrical structures), interest lies in:
- the eigenvalue distribution
- eigenvector projections.

Pioneering works are due to Wishart, on matrices

$$XX^H, \quad \text{with } X_{ij} \sim \mathcal{CN}(0, 1).$$


Wishart matrices

J. Wishart, “The generalized product moment distribution in samples from a normal multivariate population,” Biometrika, vol. 20A, pp. 32-52, 1928.

Wishart describes the distribution of $R_n = XX^H = \sum_{i=1}^n x_i x_i^H$, with $x_i \in \mathbb{C}^N$, $x_i \sim \mathcal{CN}(0, R)$:

$$P_{R_n}(B) = \frac{\pi^{-N(N-1)/2}}{\det(R)^n \prod_{i=1}^N (n-i)!} \, e^{-\operatorname{tr}(R^{-1} B)} \det(B)^{n-N}$$

Joint and marginal eigenvalue distributions:

$$P_{(\lambda_i)}(\lambda_1, \ldots, \lambda_N) = \frac{\det\big(\{e^{-r_j^{-1} \lambda_i}\}_N\big)}{\Delta(R^{-1}) \, \Delta(L)} \prod_{j=1}^N \frac{\lambda_j^{n-N}}{j! \, (n-j)!}$$

with $r_1 \geq \ldots \geq r_N$ the eigenvalues of $R$ and $L = \operatorname{diag}(\lambda_1 \geq \ldots \geq \lambda_N)$, and, in the white case $R = I_N$,

$$p_\lambda(\lambda) = \frac{1}{N} \sum_{k=0}^{N-1} \frac{k!}{(k + n - N)!} \left[ L_k^{n-N}(\lambda) \right]^2 \lambda^{n-N} e^{-\lambda}$$

where the $L_k^n$ are the Laguerre polynomials

$$L_k^n(\lambda) = \frac{e^{\lambda} \lambda^{-n}}{k!} \frac{d^k}{d\lambda^k}\left(e^{-\lambda} \lambda^{n+k}\right).$$
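As a numerical sanity check, the white-Wishart marginal density above can be evaluated with a short script, computing the Laguerre polynomials by their standard three-term recurrence. This is an illustrative sketch, not part of the original slides; the density should integrate to one, with mean $n$ (since $E\operatorname{tr} XX^H = Nn$).

```python
import numpy as np
from math import factorial

def genlaguerre_values(k, alpha, x):
    """Evaluate the generalized Laguerre polynomial L_k^alpha(x) via recurrence."""
    L_prev = np.ones_like(x)                      # L_0^alpha = 1
    if k == 0:
        return L_prev
    L_curr = 1.0 + alpha - x                      # L_1^alpha
    for m in range(1, k):
        # (m+1) L_{m+1} = (2m+1+alpha-x) L_m - (m+alpha) L_{m-1}
        L_next = ((2*m + 1 + alpha - x) * L_curr - (m + alpha) * L_prev) / (m + 1)
        L_prev, L_curr = L_curr, L_next
    return L_curr

def wishart_marginal_density(lam, N, n):
    """Marginal eigenvalue density of XX^H, X in C^{N x n}, X_ij ~ CN(0,1)."""
    alpha = n - N
    p = np.zeros_like(lam)
    for k in range(N):
        coeff = factorial(k) / factorial(k + alpha)
        p += coeff * genlaguerre_values(k, alpha, lam) ** 2
    return p * lam**alpha * np.exp(-lam) / N

# Small case N = 2, n = 4: total mass ~ 1, mean eigenvalue ~ n = 4.
lam = np.linspace(0.0, 60.0, 20001)
p = wishart_marginal_density(lam, N=2, n=4)
dx = lam[1] - lam[0]
print((p * dx).sum())        # ~ 1
print((lam * p * dx).sum())  # ~ 4
```

The recurrence avoids any dependence beyond numpy; for production use, a library routine for generalized Laguerre polynomials would be preferable.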


Extension to more generic matrices

T. Ratnarajah, R. Vaillancourt, and M. Alvo, “Eigenvalues and condition numbers of complex random matrices,” SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 2, pp. 441-456, 2005.

Extensions to:
- correlated Gaussian entries involve heavy tools (Schur polynomials)
- non-Gaussian entries are virtually intractable!

The solution is to assume increasing matrix dimensions: $N, n \to \infty$.
- deterministic limiting behaviour is often observed
- loose assumptions on the entry distributions suffice (e.g. rotational symmetry, independent entries)
- robust frameworks for very generic models are known:
  - Stieltjes transform methods (more efficient than Fourier transform)
  - moments/free probability methods (extension of classical probability to non-commutative variables)
  - physical methods for large systems (replica method)

This tutorial will introduce the major methods in use, but concentrates on the powerful Stieltjes transform method.


Tools for Random Matrix Theory Introduction to Large Dimensional Random Matrix Theory


Large dimensional data

Let $w_1, w_2, \ldots \in \mathbb{C}^N$ be independently drawn from an $N$-variate process of zero mean and covariance $R = E[w_1 w_1^H] \in \mathbb{C}^{N \times N}$.

Law of large numbers

As $n \to \infty$, with $W = [w_1, \ldots, w_n]$,

$$\frac{1}{n} \sum_{i=1}^n w_i w_i^H = \frac{1}{n} W W^H \xrightarrow{a.s.} R$$

In reality, one cannot afford $n \to \infty$.

- If $n \gg N$,
  $$R_n = \frac{1}{n} \sum_{i=1}^n w_i w_i^H$$
  is a “good” estimate of $R$.
- If $N/n = O(1)$, and if both $n$ and $N$ are large, we can still say, for all $(i, j)$,
  $$(R_n)_{ij} \xrightarrow{a.s.} (R)_{ij}$$

What about the global behaviour? What about the eigenvalue distribution?
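A quick simulation makes the point concrete: with $R = I_N$ and the ratio $N/n$ held fixed, every entry of $R_n$ is close to the corresponding entry of $R$, yet the eigenvalues of $R_n$ spread far from 1. The sketch below (illustrative only, real Gaussian entries for simplicity) is not from the slides.

```python
import numpy as np

rng = np.random.default_rng(42)
N, n = 200, 400  # ratio N/n = 0.5 kept fixed
# Samples w_i i.i.d. with covariance R = I_N
W = rng.standard_normal((N, n))
Rn = (W @ W.T) / n

# Entrywise, Rn is a good estimate of R = I_N ...
entry_err = np.abs(Rn - np.eye(N)).max()
print(entry_err)  # small: each entry has fluctuations of order 1/sqrt(n)

# ... but the eigenvalues of Rn do not cluster around 1: they spread
# over (roughly) the Marcenko-Pastur support [(1-sqrt(c))^2, (1+sqrt(c))^2]
eig = np.linalg.eigvalsh(Rn)
print(eig.min(), eig.max())  # far below and far above 1
```

This is exactly the gap the next slides address: entrywise consistency does not imply spectral consistency.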


Empirical and limit spectra of Wishart matrices

Figure: Histogram of the eigenvalues of $R_n$ for $n = 2000$, $N = 500$, $R = I_N$, with the empirical eigenvalue distribution and the Marcenko-Pastur law overlaid (x-axis: eigenvalues of $R_n$; y-axis: density).


The Marcenko-Pastur Law

Figure: The Marcenko-Pastur law $f_c(x)$ for different limit ratios $c = \lim N/n$ ($c = 0.1$, $c = 0.2$, $c = 0.5$; x-axis: $x$; y-axis: density $f_c(x)$).


The Marcenko-Pastur law

Let $W \in \mathbb{C}^{N \times n}$ have i.i.d. elements of zero mean and variance $1/n$. As $N, n \to \infty$ with $N/n \to c$, the eigenvalue distribution of the $N \times N$ matrix $WW^H$ does NOT converge to a point mass at $1$: the sample covariance matrix does not behave like the identity!

Remark: if the entries are Gaussian, the matrix is called a Wishart matrix with $n$ degrees of freedom. The exact distribution is known in the finite case.


Deriving the Marcenko-Pastur law

We wish to determine the density $f_c(\lambda)$ of the asymptotic law, defined by

$$f_c(\lambda) = \lim_{\substack{N, n \to \infty \\ N/n \to c}} \frac{1}{N} \sum_{i=1}^N \delta\big(\lambda - \lambda_i(R_n)\big)$$

With $N/n \to c$, the moments of this distribution are given by

$$M_1^N = \frac{1}{N} \operatorname{tr} R_n = \frac{1}{N} \sum_{i=1}^N \lambda_i(R_n) \to \int \lambda f_c(\lambda) d\lambda = 1$$

$$M_2^N = \frac{1}{N} \operatorname{tr} R_n^2 = \frac{1}{N} \sum_{i=1}^N \lambda_i(R_n)^2 \to \int \lambda^2 f_c(\lambda) d\lambda = 1 + c$$

$$M_3^N = \frac{1}{N} \operatorname{tr} R_n^3 = \frac{1}{N} \sum_{i=1}^N \lambda_i(R_n)^3 \to \int \lambda^3 f_c(\lambda) d\lambda = c^2 + 3c + 1$$

and so on. These moments correspond to a unique distribution function (under mild assumptions), whose density is the Marcenko-Pastur law

$$f(x) = \left(1 - \frac{1}{c}\right)^+ \delta(x) + \frac{\sqrt{(x - a)^+ (b - x)^+}}{2\pi c x}, \quad \text{with } a = (1 - \sqrt{c})^2, \ b = (1 + \sqrt{c})^2.$$
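The moment identities above are easy to verify numerically. The following sketch (illustrative, not from the slides) draws one large sample covariance matrix, compares its first three empirical moments with $1$, $1 + c$, and $c^2 + 3c + 1$, and checks that the Marcenko-Pastur density integrates to one.

```python
import numpy as np

def mp_density(x, c):
    """Marcenko-Pastur density on its continuous support (for 0 < c <= 1)."""
    a, b = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
    f = np.zeros_like(x)
    on = (x > a) & (x < b)
    f[on] = np.sqrt((x[on] - a) * (b - x[on])) / (2 * np.pi * c * x[on])
    return f

rng = np.random.default_rng(0)
N, n = 500, 2000  # ratio c = N/n = 0.25
c = N / n
X = rng.standard_normal((N, n)) / np.sqrt(n)  # real entries for simplicity
eig = np.linalg.eigvalsh(X @ X.T)

# Empirical moments vs. the asymptotic values 1, 1 + c, c^2 + 3c + 1
M1, M2, M3 = eig.mean(), (eig**2).mean(), (eig**3).mean()
print(M1, M2, M3)  # ~ 1, ~ 1.25, ~ 1.8125

# The continuous part carries all the mass when c < 1
x = np.linspace(0.0, 3.0, 30001)
mass = (mp_density(x, c) * (x[1] - x[0])).sum()
print(mass)  # ~ 1
```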


Tools for Random Matrix Theory The Random Matrix Pioneers

Wigner and semi-circle law

Schrödinger's equation:

$$H\Phi_i = E_i \Phi_i$$

where $\Phi_i$ is the wave function, $E_i$ is the energy level, and $H$ is the Hamiltonian.

Magnetic interactions between the spins of electrons


The birth of large dimensional random matrix theory

Eugene Paul Wigner, 1902-1995


The birth of large dimensional random matrix theory

E. Wigner, “Characteristic vectors of bordered matrices with infinite dimensions,” The Annals of Mathematics, vol. 62, pp. 546-564, 1955.

$$X_N = \frac{1}{\sqrt{N}}
\begin{pmatrix}
0 & +1 & +1 & +1 & -1 & -1 & \cdots \\
+1 & 0 & -1 & +1 & +1 & +1 & \cdots \\
+1 & -1 & 0 & +1 & +1 & +1 & \cdots \\
+1 & +1 & +1 & 0 & +1 & +1 & \cdots \\
-1 & +1 & +1 & +1 & 0 & -1 & \cdots \\
-1 & +1 & +1 & +1 & -1 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$

a symmetric matrix with zero diagonal and random $\pm 1$ entries above the diagonal.

As the matrix dimension increases, what can we say about the eigenvalues (energy levels)?


Semi-circle law, Full circle law...

If $X_N \in \mathbb{C}^{N \times N}$ is Hermitian with i.i.d. entries of mean $0$ and variance $1/N$ above the diagonal, then $F^{X_N} \xrightarrow{a.s.} F$, where $F$ has density $f$, the semi-circle law

$$f(x) = \frac{1}{2\pi} \sqrt{(4 - x^2)^+}$$

This is shown by the method of moments:

$$\lim_{N \to \infty} \frac{1}{N} \operatorname{tr} X_N^{2k} = \frac{1}{k+1} \binom{2k}{k}$$

(the Catalan numbers), which are exactly the moments of $f(x)$!
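These trace moments can be checked by direct simulation. The sketch below (illustrative, not from the slides) builds a Wigner matrix with $\pm 1$ entries and compares $\frac{1}{N}\operatorname{tr} X_N^{2k}$ with the first Catalan numbers.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
N = 1000
# Symmetric matrix with zero diagonal and i.i.d. +-1 entries above the
# diagonal, scaled by 1/sqrt(N) (the Wigner model of the previous slide)
A = rng.choice([-1.0, 1.0], size=(N, N))
X = np.triu(A, 1)
X = (X + X.T) / np.sqrt(N)

X2 = X @ X
m2 = np.trace(X2) / N        # -> Catalan number C_1 = 1
m4 = np.trace(X2 @ X2) / N   # -> Catalan number C_2 = 2
catalan = [comb(2 * k, k) // (k + 1) for k in (1, 2)]
print(m2, m4, catalan)
```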

If $X_N \in \mathbb{C}^{N \times N}$ has i.i.d. entries of zero mean and variance $1/N$, then asymptotically its complex eigenvalues distribute uniformly on the complex unit disc (the circular law).


Semi-circle law

Figure: Histogram of the eigenvalues of a Wigner matrix and the semi-circle law, for $N = 500$ (x-axis: eigenvalues; y-axis: density).


Circular law

Figure: Eigenvalues of $X_N$ with i.i.d. standard Gaussian entries, for $N = 500$, compared with the circular law (axes: real and imaginary parts of the eigenvalues).


More involved matrix models

- Much study has surrounded the Marcenko-Pastur law, the Wigner semi-circle law, etc.
- For practical purposes, we often need more general matrix models:
  - products and sums of random matrices
  - i.i.d. models with correlation/variance profile
  - distributions of inverses, etc.
- For these models, it is often impossible to obtain a closed-form expression of the limiting distribution.
- Sometimes there is no limiting convergence at all.

To study these models, the method of moments is not enough! A consistent, powerful mathematical framework is required.


Tools for Random Matrix Theory The Moment Approach and Free Probability

Outline

1. Tools for Random Matrix Theory
   - Classical Random Matrix Theory
   - Introduction to Large Dimensional Random Matrix Theory
   - The Random Matrix Pioneers
   - The Moment Approach and Free Probability
   - Introduction of the Stieltjes Transform
   - Properties of the Asymptotic Support and Spiked Models
   - Summary of what we know and what is left to be done
2. Random Matrix Theory and Signal Source Sensing
   - Small Dimensional Analysis
   - Large Dimensional Random Matrix Analysis
3. Random Matrix Theory and Multi-Source Power Estimation
   - Optimal detector
   - The moment method
   - The Stieltjes transform method
4. Random Matrix Theory and Failure Detection in Complex Systems
   - Random matrix models of local failures in sensor networks
   - Failure detection and localization



Eigenvalue distribution and moments

The Hermitian matrix R_N ∈ C^{N×N} has successive empirical moments M_k^N, k = 1, 2, ...,

  M_k^N = (1/N) ∑_{i=1}^N λ_i^k

In classical probability theory, for A, B independent,

  c_k(A + B) = c_k(A) + c_k(B)

with c_k(X) the cumulants of X. The cumulants c_k are connected to the moments m_k by

  m_k = ∑_{π ∈ P(k)} ∏_{V ∈ π} c_{|V|}

where P(k) denotes the set of all partitions of {1, ..., k}.

A natural extension of classical probability to non-commutative random variables exists, called Free Probability.

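The classical moment-cumulant formula above can be checked mechanically. The following sketch (function names are ours, not from the slides) enumerates all set partitions of {0, ..., k-1} and sums the products of cumulants; applied to the standard Gaussian, whose only nonzero cumulant is c_2 = 1, it recovers the familiar moments 0, 1, 0, 3, 0, 15.

```python
# Classical moment-cumulant formula: m_k = sum over all partitions in P(k)
# of the products of cumulants c_{|V|}. Checked on the standard Gaussian.
from math import prod

def partitions(elems):
    """Yield all set partitions of the list `elems`."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        # put `first` into each existing block, or into a new singleton block
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def moment_from_cumulants(k, c):
    """m_k = sum_{pi in P(k)} prod_{V in pi} c_{|V|}; c maps block size -> cumulant."""
    return sum(prod(c.get(len(V), 0) for V in p)
               for p in partitions(list(range(k))))

gaussian = {2: 1}  # standard Gaussian: only c_2 is nonzero
print([moment_from_cumulants(k, gaussian) for k in range(1, 7)])
# -> [0, 1, 0, 3, 0, 15]
```

With all cumulants equal to 1 (a unit-rate Poisson), the same routine returns the Bell numbers, as the formula predicts.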



Free probability

Free probability applies to asymptotically large random matrices. We denote the moments without superscript.

To connect the moments of A + B to those of A and B, independence is not enough: A and B must be asymptotically free. For instance,
  - two Gaussian matrices are free
  - a Gaussian matrix and any deterministic matrix are free
  - unitary (Haar distributed) matrices are free
  - a Haar matrix and a Gaussian matrix are free, etc.

Similarly to classical probability, we define free cumulants C_k,

  C_1 = M_1
  C_2 = M_2 − M_1^2
  C_3 = M_3 − 3 M_1 M_2 + 2 M_1^3

R. Speicher, "Combinatorial theory of the free product with amalgamation and operator-valued free probability theory," Mem. A.M.S., vol. 627, 1998.

Combinatorial description by non-crossing partitions,

  M_n = ∑_{π ∈ NC(n)} ∏_{V ∈ π} C_{|V|}

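The free moment-cumulant formula differs from the classical one only in that the sum is restricted to NON-CROSSING partitions. A minimal sketch (helper names are ours): for the semi-circle law, whose only nonzero free cumulant is C_2 = 1, the even moments are the Catalan numbers 1, 2, 5, counting non-crossing pairings.

```python
# Free moment-cumulant formula: M_n = sum over NC(n) of prod C_{|V|}.
from math import prod

def partitions(elems):
    """Yield all set partitions of the list `elems`."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def is_noncrossing(p):
    """A partition crosses if a < b < c < d with a, c in one block and b, d in another."""
    for B1 in p:
        for B2 in p:
            if B1 is B2:
                continue
            for a in B1:
                for c in B1:
                    for b in B2:
                        for d in B2:
                            if a < b < c < d:
                                return False
    return True

def free_moment(n, C):
    """M_n = sum_{pi in NC(n)} prod_{V in pi} C_{|V|}; C maps block size -> free cumulant."""
    return sum(prod(C.get(len(V), 0) for V in p)
               for p in partitions(list(range(n))) if is_noncrossing(p))

semicircle = {2: 1}  # semi-circle law: only C_2 is nonzero
print([free_moment(n, semicircle) for n in range(1, 7)])
# -> [0, 1, 0, 2, 0, 5]  (Catalan numbers at even orders)
```

Comparing with the classical routine (3 pairings of 4 points versus the 2 non-crossing ones) makes the Gaussian/semi-circle analogy concrete.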



Non-crossing partitions

Figure: Non-crossing partition π = {{1, 3, 4}, {2}, {5, 6, 7}, {8}} of NC(8).



Moments of sums and products of random matrices

Combinatorial calculus of all moments.

Theorem
For free random matrices A and B, we have the relationships

  C_k(A + B) = C_k(A) + C_k(B)

  M_n(AB) = ∑_{(π_1, π_2)} ∏_{V_1 ∈ π_1} C_{|V_1|}(A) ∏_{V_2 ∈ π_2} C_{|V_2|}(B)

the sum running over pairs (π_1, π_2) of complementary (Kreweras) non-crossing partitions in NC(n), which, in conjunction with the free moment-cumulant formula, gives all moments of the sum and the product.

Theorem
If F is a compactly supported distribution function, then F is determined by its moments.

In the absence of support compactness, some conditions (e.g., Carleman's condition) have to be checked. This is in particular the case for Vandermonde matrices.

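The additivity C_k(A + B) = C_k(A) + C_k(B) can be observed numerically on matrices that are only asymptotically free. The sketch below (normalizations are the usual Wigner ones, not taken from the slides) sums two independent Wigner matrices with semi-circle moments M_2 = 1, M_4 = 2; cumulant additivity gives C_2(A + B) = 2, hence M_2(A + B) = 2 and M_4(A + B) = 2·C_2^2 = 8 (two non-crossing pairings).

```python
# Monte Carlo check of free cumulant additivity for two independent Wigner matrices.
import numpy as np

rng = np.random.default_rng(0)
N = 500

def wigner(n):
    """Hermitian matrix with semi-circle spectrum on [-2, 2] (M_2 = 1, M_4 = 2)."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (g + g.conj().T) / (2 * np.sqrt(n))

A, B = wigner(N), wigner(N)
S = A + B
m2 = np.trace(S @ S).real / N          # predicted: C_2(A) + C_2(B) = 2
m4 = np.trace(S @ S @ S @ S).real / N  # predicted: 2 * C_2(A+B)^2 = 8
print(m2, m4)                          # approx 2 and 8 for large N
```

Replacing B by a copy of A (maximally non-free) would instead give M_4 = 16·M_4(A) = 32, showing that independence of the entries, i.e. asymptotic freeness, is what makes the cumulants add.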



Free convolution

In classical probability theory, for independent A, B,

  μ_{A+B}(x) = μ_A(x) ∗ μ_B(x) ≜ ∫ μ_A(t) μ_B(x − t) dt

In free probability, for free A, B, we use the notations

  μ_{A+B} = μ_A ⊞ μ_B,  μ_A = μ_{A+B} ⊟ μ_B,  μ_{AB} = μ_A ⊠ μ_B,  μ_A = μ_{AB} ⧅ μ_B

(additive free convolution ⊞ and deconvolution ⊟, multiplicative free convolution ⊠ and deconvolution ⧅).

Ø. Ryan, M. Debbah, "Multiplicative free convolution and information-plus-noise type matrices," Arxiv preprint math.PR/0702342, 2007.

Theorem (Convolution of the information-plus-noise model)
Let W_N ∈ C^{N×n} have i.i.d. Gaussian entries of mean 0 and variance 1, and A_N ∈ C^{N×n} be such that μ_{(1/n) A_N A_N^H} ⇒ μ_A, as n/N → c. Then the eigenvalue distribution of

  B_N = (1/n) (A_N + σ W_N)(A_N + σ W_N)^H

converges weakly and almost surely to μ_B such that

  μ_B = ((μ_A ⧅ μ_c) ⊞ δ_{σ²}) ⊠ μ_c

with μ_c the Marcenko-Pastur law with ratio c.




Similarities between classical and free probability

                               Classical probability                  Free probability
  Moments                      m_k = ∫ x^k dF(x)                      M_k = ∫ x^k dF(x)
  Cumulants                    m_n = ∑_{π∈P(n)} ∏_{V∈π} c_{|V|}       M_n = ∑_{π∈NC(n)} ∏_{V∈π} C_{|V|}
  Independence                 classical independence                 freeness
  Additive convolution         f_{A+B} = f_A ∗ f_B                    μ_{A+B} = μ_A ⊞ μ_B
  Multiplicative convolution   f_{AB}                                 μ_{AB} = μ_A ⊠ μ_B
  Sum rule                     c_k(A+B) = c_k(A) + c_k(B)             C_k(A+B) = C_k(A) + C_k(B)
  Central limit                (1/√n) ∑_{i=1}^n x_i → N(0, 1)         (1/√n) ∑_{i=1}^n X_i ⇒ semi-circle law



Bibliography on Free Probability related work

- D. Voiculescu, "Addition of certain non-commuting random variables," Journal of Functional Analysis, vol. 66, no. 3, pp. 323-346, 1986.
- R. Speicher, "Combinatorial theory of the free product with amalgamation and operator-valued free probability theory," Mem. A.M.S., vol. 627, 1998.
- R. Seroul, D. O'Shea, "Programming for Mathematicians," Springer, 2000.
- H. Bercovici, V. Pata, "The law of large numbers for free identically distributed random variables," The Annals of Probability, pp. 453-465, 1996.
- A. Nica, R. Speicher, "On the multiplication of free N-tuples of noncommutative random variables," American Journal of Mathematics, pp. 799-837, 1996.
- Ø. Ryan, M. Debbah, "Multiplicative free convolution and information-plus-noise type matrices," Arxiv preprint math.PR/0702342, 2007.
- N. R. Rao, A. Edelman, "The polynomial method for random matrices," Foundations of Computational Mathematics, vol. 8, no. 6, pp. 649-702, 2008.
- Ø. Ryan, M. Debbah, "Asymptotic Behavior of Random Vandermonde Matrices With Entries on the Unit Circle," IEEE Trans. on Information Theory, vol. 55, no. 7, pp. 3115-3147, 2009.


Introduction of the Stieltjes Transform


The Stieltjes transform

Definition
Let F be a real distribution function. The Stieltjes transform m_F of F is the function defined, for z ∈ C \ R, as

  m_F(z) = ∫ 1/(λ − z) dF(λ)

Denoting z = x + iy, we have the inversion formula for the density

  F′(x) = lim_{y→0} (1/π) ℑ[m_F(x + iy)]

Knowing the Stieltjes transform is knowing the eigenvalue distribution!

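The inversion formula is easy to see at work numerically. In this sketch (our own normalizations, not from the slides), m_F(z) = (1/N) tr(X − zI)^{-1} is evaluated for a Wigner matrix slightly above the real axis, and ℑ[m_F(x + iε)]/π is compared with the semi-circle density (1/2π)√(4 − x²) at x = 0.

```python
# Stieltjes inversion: F'(x) ~ Im[m_F(x + i*eps)] / pi for small eps.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
g = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = (g + g.conj().T) / (2 * np.sqrt(N))   # Wigner, semi-circle on [-2, 2]
eig = np.linalg.eigvalsh(X)

def m_F(z):
    """Empirical Stieltjes transform (1/N) tr (X - zI)^{-1}."""
    return np.mean(1.0 / (eig - z))

eps = 0.1
x = 0.0
density = m_F(x + 1j * eps).imag / np.pi
print(density, np.sqrt(4 - x**2) / (2 * np.pi))  # both approx 0.318
```

Taking ε smaller sharpens the estimate but amplifies finite-N fluctuations; this trade-off reappears below when densities are plotted from fixed-point equations.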



Remark on the Stieltjes transform

If F is the eigenvalue distribution of a Hermitian matrix X_N ∈ C^{N×N}, we may denote m_X ≜ m_F, and

  m_X(z) = ∫ 1/(λ − z) dF(λ) = (1/N) tr (X_N − z I_N)^{-1}

For a compactly supported eigenvalue distribution and |z| large,

  m_F(z) = −(1/z) ∫ 1/(1 − λ/z) dF(λ) = −∑_{k=0}^∞ M_k^N z^{−k−1}

The Stieltjes transform is doubly more powerful than the moment approach:
  - it conveys more information than any finite sequence M_1, ..., M_K;
  - it is not handicapped by the support compactness constraint;
  - however, Stieltjes transform methods, while stronger, are more painful to work with.

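The series expansion above, valid for |z| beyond the support, connects the Stieltjes transform back to the moments. A quick sketch (assumption: the spectrum lies well inside |λ| < 10, so the series converges fast at z = 10) compares the truncated moment series with the exact trace formula.

```python
# m_F(z) = -sum_k M_k^N z^{-k-1}: truncated series vs exact resolvent trace.
import numpy as np

rng = np.random.default_rng(2)
N = 50
g = rng.standard_normal((N, N))
X = (g + g.T) / (2 * np.sqrt(N))                  # spectrum inside [-2, 2]
eig = np.linalg.eigvalsh(X)

z = 10.0
exact = np.mean(1.0 / (eig - z))                  # (1/N) tr (X - zI)^{-1}
moments = [np.mean(eig**k) for k in range(20)]    # M_0^N = 1, M_1^N, ...
series = -sum(M * z**(-k - 1) for k, M in enumerate(moments))
print(exact, series)                              # agree to many digits
```

Inside the support the series diverges, which is exactly why the Stieltjes transform carries more information than any finite moment sequence.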



Stieltjes transform proof of the Marcenko-Pastur law

We wish to prove that the spectrum of XX^H, X ∈ C^{N×n} with entries CN(0, 1/n), tends to the MP law.

From a matrix inversion lemma,

  [(XX^H − z I_N)^{-1}]_{11} = 1 / (−z − z y^H (Y^H Y − z I_n)^{-1} y)

with X^H = [y  Y^H].

From the trace lemma, for all large n,

  y^H (Y^H Y − z I_n)^{-1} y ≃ (1/n) tr (Y^H Y − z I_n)^{-1}

From the rank-1 perturbation lemma,

  (1/n) tr (Y^H Y − z I_n)^{-1} ≃ (1/n) tr (X^H X − z I_n)^{-1}.

Since the spectrum of XX^H is the same as that of X^H X up to additional zero eigenvalues,

  (1/n) tr (X^H X − z I_n)^{-1} = (1/n) tr (XX^H − z I_N)^{-1} + ((N − n)/n) (1/z).

Replacing and summing over all diagonal components,

  (1/N) tr (XX^H − z I_N)^{-1} ≃ 1 / (1 − N/n − z − z (N/n) (1/N) tr (XX^H − z I_N)^{-1})




Stieltjes transform proof of the Marcenko-Pastur law (2)

This is a second-order polynomial equation of the type

  m_F(z) = 1 / (1 − c − z − z c m_F(z))

with solution

  m_F(z) = (1 − c)/(2cz) − 1/(2c) − √((1 − c − z)² − 4cz) / (2cz)

Using the Stieltjes inversion formula

  f(x) ≜ F′(x) = lim_{y→0} (1/π) ℑ[m_F(x + iy)]

we finally obtain

  f(x) = (1 − c^{-1})⁺ δ(x) + (1/(2πcx)) √((x − a)⁺ (b − x)⁺)

with a = (1 − √c)², b = (1 + √c)², of support [a, b] (plus an atom at 0 when c > 1).

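The closed-form density can be checked directly against simulation. This sketch (c = N/n here, matching the support edges a = (1 − √c)², b = (1 + √c)²) integrates the MP density numerically, verifying unit mass and unit mean, and confirms that the empirical eigenvalues of XX^H with CN(0, 1/n) entries fall inside [a, b].

```python
# Marcenko-Pastur density vs the empirical spectrum of X X^H.
import numpy as np

rng = np.random.default_rng(3)
N, n = 300, 900
c = N / n
X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2 * n)
eig = np.linalg.eigvalsh(X @ X.conj().T)

a, b = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2

def mp_density(x):
    """f(x) = sqrt((x-a)^+ (b-x)^+) / (2 pi c x), the continuous MP part (c <= 1)."""
    return np.sqrt(np.maximum(x - a, 0) * np.maximum(b - x, 0)) / (2 * np.pi * c * x)

def trapezoid(y, x):
    """Simple trapezoidal rule, to stay independent of the numpy version."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

xs = np.linspace(a, b, 2000)
mass = trapezoid(mp_density(xs), xs)        # should be 1 for c <= 1
mean = trapezoid(xs * mp_density(xs), xs)   # MP law has mean 1
print(mass, mean, eig.min(), eig.max())
```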


Other asymptotic results using the Stieltjes transform

J. W. Silverstein, Z. D. Bai, "On the empirical distribution of eigenvalues of a class of large dimensional random matrices," Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.

Theorem
Let B_N = X_N T_N X_N^H ∈ C^{N×N}, where X_N ∈ C^{N×n} has i.i.d. entries of mean 0 and variance 1/N, F^{T_N} ⇒ F^T, and n/N → c. Then F^{B_N} ⇒ F almost surely, F having Stieltjes transform

  m_F(z) = (c ∫ t/(1 + t m_F(z)) dF^T(t) − z)^{-1} = [(1/N) tr T_N (m_F(z) T_N + I_n)^{-1} − z]^{-1}

which has a unique solution m_F(z) ∈ C⁺ if z ∈ C⁺, and m_F(z) > 0 if z < 0.

  - In general, there is no explicit expression for F.
  - Stieltjes transform of B̲_N = T_N^{1/2} X_N^H X_N T_N^{1/2}, with asymptotic distribution F̲:

  m_F̲(z) = c m_F(z) + (c − 1) (1/z)

  - This is the spectrum of the sample covariance matrix model B̲_N = ∑_{i=1}^n x_i x_i^H, with X_N^H = [x_1, ..., x_n], the x_i i.i.d. with zero mean and covariance T_N = E[x_1 x_1^H].

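The canonical equation can be solved by fixed-point iteration and compared with a simulation of B_N = X T X^H. In this sketch (our own setup: F^T uniform on {1, 3, 7}, c = n/N = 3, and a test point z well inside C⁺ where the plain iteration converges), the fixed point agrees with the empirical (1/N) tr(B_N − zI)^{-1}.

```python
# Silverstein-Bai canonical equation vs Monte Carlo resolvent trace.
import numpy as np

rng = np.random.default_rng(4)
N, n = 600, 1800
c = n / N
t_masses = np.repeat([1.0, 3.0, 7.0], n // 3)    # F^T uniform on {1, 3, 7}

X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2 * N)
B = (X * t_masses) @ X.conj().T                  # X T X^H with T = diag(t_masses)

z = 1.0 + 1.0j
m = 1j                                           # initial point in C^+
for _ in range(500):                             # fixed-point iteration
    m = 1.0 / (c * np.mean(t_masses / (1 + t_masses * m)) - z)

eig = np.linalg.eigvalsh(B)
m_emp = np.mean(1.0 / (eig - z))                 # empirical Stieltjes transform
print(m, m_emp)                                  # close for large N
```

The iteration maps C⁺ to C⁺, which is why it can be started anywhere in the upper half plane.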



Getting F′ from m_F

Remember that

  f(x) ≜ F′(x) = lim_{y→0} (1/π) ℑ[m_F(x + iy)]

To plot the density f(x), span z = x + iy on the line {x ∈ R, y = ε} parallel and close to the real axis, solve m_F(z) for each z, and plot (1/π) ℑ[m_F(z)].

Example (Sample covariance matrix)
For N a multiple of 3, let dF^T(x) = (1/3) δ(x − 1) + (1/3) δ(x − 3) + (1/3) δ(x − K), and let B_N = T_N^{1/2} X_N^H X_N T_N^{1/2} with F^{B_N} ⇒ F̲. Then

  m_F̲(z) = c m_F(z) + (c − 1) (1/z)
  m_F(z) = (c ∫ t/(1 + t m_F(z)) dF^T(t) − z)^{-1}

We take c = 1/10 and alternatively K = 7 and K = 4.

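A sketch of this recipe in Python (variable names are ours). For simplicity we plot the density associated with the fixed-point equation itself, i.e. the X T X^H side of the model; the slide's second transform then follows from the linear relation between the two. With the masses {1, 3, 7} and c = 1/10, the scan shows the three separated bulks of the example, plus the smeared mass at 0 coming from rank deficiency.

```python
# Density scan: span z = x + i*eps near the real axis, solve the canonical
# equation for m_F(z) at each x, and record Im[m_F(z)] / pi.
import numpy as np

c = 0.1
t_masses = np.array([1.0, 3.0, 7.0])   # dF^T = (1/3)(delta_1 + delta_3 + delta_7)
eps = 1e-2
xs = np.linspace(0.05, 10.0, 400)
density = np.empty_like(xs)

m = 1j                                  # start in C^+; warm-start across the grid
for i, x in enumerate(xs):
    z = x + 1j * eps
    for _ in range(2000):               # fixed-point iteration at this z
        m = 1.0 / (c * np.mean(t_masses / (1 + t_masses * m)) - z)
    density[i] = m.imag / np.pi         # approximates the density at x

print(density.min(), density.max())     # strictly positive, bounded
```

Decreasing ε sharpens the support edges but slows the fixed-point convergence, so the number of iterations per grid point has to grow accordingly.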



Spectrum of the sample covariance matrix

Figure: Histogram of the eigenvalues of B_N = T_N^{1/2} X_N^H X_N T_N^{1/2}, N = 3000, n = 300, with T_N diagonal composed of three evenly weighted masses at (i) 1, 3 and 7 on top, (ii) 1, 3 and 4 at bottom.



The Shannon Transform

A. M. Tulino, S. Verdu, "Random matrix theory and wireless communications," Now Publishers Inc., 2004.

Definition
Let F be a probability distribution and m_F its Stieltjes transform. The Shannon transform V_F of F is defined as

  V_F(x) ≜ ∫_0^∞ log(1 + xλ) dF(λ) = ∫_{1/x}^∞ (1/t − m_F(−t)) dt

If F is the distribution function of the eigenvalues of XX^H ∈ C^{N×N},

  V_F(x) = (1/N) log det(I_N + x XX^H).

Note that this last relation is fundamental for wireless communication purposes!

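The log-det identity is a pure linear-algebra fact, and worth seeing once in code. A minimal sketch with arbitrary dimensions (our own choices): the per-dimension log-determinant equals the Shannon transform of the empirical spectrum, i.e. the average of log(1 + xλ_i) over the eigenvalues.

```python
# V_F(x) = (1/N) log det(I_N + x X X^H) = (1/N) sum_i log(1 + x lambda_i).
import numpy as np

rng = np.random.default_rng(5)
N, n = 8, 16
X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2 * n)
XXH = X @ X.conj().T

x = 2.5                                            # an SNR-like parameter
sign, logabsdet = np.linalg.slogdet(np.eye(N) + x * XXH)
V_det = logabsdet / N                              # log-det form
V_eig = np.mean(np.log(1 + x * np.linalg.eigvalsh(XXH)))  # spectral form
print(V_det, V_eig)                                # identical up to rounding
```

This is why deterministic equivalents of m_F translate directly into capacity approximations: integrating the Stieltjes transform gives V_F, which is the per-antenna log-det expression.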

Page 69: Random Matrix Theory for Signal Processing Applications · Random Matrix Theory for Signal Processing Applications Romain Couillet1, Merouane Debbah´ 2 1EDF Chair on System Sciences

Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

Outline

1 Tools for Random Matrix Theory
    Classical Random Matrix Theory
    Introduction to Large Dimensional Random Matrix Theory
    The Random Matrix Pioneers
    The Moment Approach and Free Probability
    Introduction of the Stieltjes Transform
    Properties of the Asymptotic Support and Spiked Models
    Summary of what we know and what is left to be done

2 Random Matrix Theory and Signal Source Sensing
    Small Dimensional Analysis
    Large Dimensional Random Matrix Analysis

3 Random Matrix Theory and Multi-Source Power Estimation
    Optimal detector
    The moment method
    The Stieltjes transform method

4 Random Matrix Theory and Failure Detection in Complex Systems
    Random matrix models of local failures in sensor networks
    Failure detection and localization


Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

No eigenvalues outside the support!

Z. Bai, J. Silverstein, "No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices," Annals of Prob., vol. 26, no. 1, pp. 316-345, 1998.

We showed that the eigenvalue distribution F^{B_N} of B_N = XTX^H, with F^{T_N} ⇒ F^T:
- is similar to a deterministic distribution F_N;
- sometimes converges WEAKLY to F, with Supp(F) made of compact sets.

There is more:

Figure: [Histogram of the eigenvalues of B_N = XTX^H against the limiting spectrum of B_N; axes "Eigenvalues" vs "Density".]

For all N_0, there is no eigenvalue of B_N outside Supp(F) ∪ ⋃_{N ≥ N_0} Supp(F_N), for all large N.
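The statement is easy to see in the null case T = I_N, where the limit is the Marcenko-Pastur law supported on [(1−√c)², (1+√c)²]. A minimal sketch (illustrative sizes, real Gaussian entries rather than complex):

```python
import numpy as np

# Bai-Silverstein: for T = I_N, all eigenvalues of B_N = X X^H stay
# inside (a small neighborhood of) the Marcenko-Pastur support
# [(1-sqrt(c))^2, (1+sqrt(c))^2], c = N/n, for all large N.
rng = np.random.default_rng(1)
N, n = 400, 1200
c = N / n
X = rng.standard_normal((N, n)) / np.sqrt(n)
lam = np.linalg.eigvalsh(X @ X.T)

edge_lo = (1 - np.sqrt(c)) ** 2
edge_hi = (1 + np.sqrt(c)) ** 2
print(lam.min(), edge_lo, lam.max(), edge_hi)
```

With N = 400 the extreme eigenvalues fluctuate only on the O(N^{-2/3}) scale around the support edges, so no eigenvalue strays visibly outside.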


Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

The spiked model

For T composed of finitely many eigenvalues with large multiplicities (e.g. T = I_N), no eigenvalue of B_N lies outside Supp(F).

If, for r fixed, T is a rank-r perturbation of I_N,

T = diag( 1, ..., 1, 1 + ω_1, ..., 1 + ω_r ),   with 1 of multiplicity N − r,

then, depending on whether ω_i > √(N/n) or not, the i-th largest eigenvalue of B_N either separates from the bulk and converges to 1 + ω_i + c (1 + ω_i)/ω_i, or sticks to the edge of the bulk.

Figure: Eigenvalues of B_N = T^{1/2} X X^H T^{1/2}, T diagonal of 1's but for the last four entries set to {3, 3, 2, 2}. On top, N = 500, n = 1500 (Marcenko-Pastur law, c = 1/3). At bottom, N = 500, n = 400 (Marcenko-Pastur law, c = 5/4). The theoretical limiting eigenvalues 1 + ω_i + c (1 + ω_i)/ω_i of B_N are stressed.
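The spike location 1 + ω + c(1+ω)/ω can be observed directly. A minimal sketch (illustrative sizes and spike value, real Gaussian entries rather than complex):

```python
import numpy as np

# Rank-one spiked model T = diag(1,...,1, 1+omega): when omega > sqrt(c),
# the largest eigenvalue of B_N = T^(1/2) X X^H T^(1/2) separates from
# the bulk and converges to 1 + omega + c(1+omega)/omega.
rng = np.random.default_rng(2)
N, n = 500, 1500
c = N / n                        # c = 1/3, threshold sqrt(c) ~ 0.577
omega = 2.0                      # spike above the phase transition
t = np.ones(N)
t[-1] = 1.0 + omega
X = rng.standard_normal((N, n)) / np.sqrt(n)
Y = np.sqrt(t)[:, None] * X      # T^(1/2) X
lam_max = np.linalg.eigvalsh(Y @ Y.T).max()

predicted = 1 + omega + c * (1 + omega) / omega
print(lam_max, predicted)
```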


Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

Limits for the spiked models

J. Baik, J. W. Silverstein, "Eigenvalues of large sample covariance matrices of spiked population models," Journal of Multivariate Analysis, vol. 97, no. 6, pp. 1382-1408, 2006.
D. Paul, "Asymptotics of sample eigenstructure for a large dimensional spiked covariance model," Statistica Sinica, vol. 17, no. 4, p. 1617, 2007.

Assume T as above, with:
- ω_1 > ... > ω_r > 0 the population spikes;
- u_1, ..., u_r ∈ C^N the associated population eigenvectors;
- λ_1 > ... > λ_r the largest eigenvalues of B_N;
- û_1, ..., û_r the associated sample eigenvectors.

Then, with lim N/n = c, we have the first order limits:

λ_k →(a.s.) 1 + ω_k + c (1 + ω_k)/ω_k   if ω_k > √c,
λ_k →(a.s.) (1 + √c)^2                  if ω_k ≤ √c.

|u_k^* û_k|^2 →(a.s.) (1 − c ω_k^{−2}) / (1 + c ω_k^{−1})   if ω_k > √c,
|u_k^* û_k|^2 →(a.s.) 0                                      if ω_k ≤ √c.
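The eigenvector limit, strictly below 1, is just as easy to observe as the eigenvalue limit. A minimal sketch (illustrative sizes, real Gaussian entries rather than complex):

```python
import numpy as np

# Sample/population eigenvector alignment in the spiked model: for
# omega > sqrt(c),  |u* u_hat|^2 -> (1 - c/omega^2) / (1 + c/omega),
# strictly below 1 -- the sample eigenvector never fully aligns.
rng = np.random.default_rng(3)
N, n = 500, 1500
c = N / n
omega = 2.0                      # spike above sqrt(c)
t = np.ones(N)
t[-1] = 1.0 + omega
X = rng.standard_normal((N, n)) / np.sqrt(n)
Y = np.sqrt(t)[:, None] * X
_, V = np.linalg.eigh(Y @ Y.T)   # eigenvectors, ascending eigenvalue order
u_hat = V[:, -1]                 # sample spike eigenvector
u_pop = np.zeros(N)
u_pop[-1] = 1.0                  # population spike eigenvector of T

alignment = (u_pop @ u_hat) ** 2
predicted = (1 - c / omega**2) / (1 + c / omega)
print(alignment, predicted)
```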


Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

Second order limits for the spiked models

I. M. Johnstone, "On the distribution of the largest eigenvalue in principal components analysis," Annals of Statistics, vol. 99, no. 2, pp. 295-327, 2001.
J. Baik, G. Ben Arous, S. Péché, "Phase transition of the largest eigenvalue for non-null complex sample covariance matrices," The Annals of Prob., vol. 33, no. 5, pp. 1643-1697, 2005.
R. Couillet, W. Hachem, "Local failure detection and identification in large sensor networks," submitted to IEEE Transactions on Information Theory, 2011.

We also have the second order limits in the Gaussian case.

If ω_k > √c:

√N ( |û_k^* u_k|^2 − (1 − c ω_k^{−2})/(1 + c ω_k^{−1}),  λ_k − (1 + ω_k + c (1 + ω_k)/ω_k) )  ⇒  N(0, C_k)

with covariance matrix

C_k = [ c^2 (1+ω_k)^2 / ((c+ω_k)^2 (ω_k^2 − c)) · ( c (1+ω_k)^2/(c+ω_k)^2 + 1 )      (1+ω_k)^3 c^2 / ((ω_k+c)^2 ω_k)
        (1+ω_k)^3 c^2 / ((ω_k+c)^2 ω_k)                                              c (1+ω_k)^2 (ω_k^2 − c) / ω_k^2 ].

If ω_k < √c:

N^{2/3} ( λ_k − (1 + √c)^2 ) / ( (1 + √c)^{4/3} √c )  ⇒  T_2

with T_2 the complex Tracy-Widom distribution.
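The Tracy-Widom centering and scaling can be checked on a single draw: the centered-scaled statistic is O(1) while the eigenvalue itself concentrates. A minimal sketch in the null case T = I (illustrative sizes, real Gaussian entries, so the limit is actually the real Tracy-Widom law rather than T_2):

```python
import numpy as np

# Tracy-Widom normalization for the largest eigenvalue of X X^H (T = I):
# N^(2/3) (lam_max - (1+sqrt(c))^2) / ((1+sqrt(c))^(4/3) sqrt(c)) is an
# O(1) fluctuation; lam_max itself sticks to the edge (1+sqrt(c))^2.
rng = np.random.default_rng(4)
N, n = 500, 1500
c = N / n
X = rng.standard_normal((N, n)) / np.sqrt(n)
lam_max = np.linalg.eigvalsh(X @ X.T).max()

edge = (1 + np.sqrt(c)) ** 2
tw_stat = N ** (2 / 3) * (lam_max - edge) / ((1 + np.sqrt(c)) ** (4 / 3) * np.sqrt(c))
print(lam_max, tw_stat)
```

A single draw only illustrates the scale of the fluctuation; the histogram over many draws (as in the next figure) is what converges to the Tracy-Widom density.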


Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

Second order statistics, ω_k < √c

Figure: Distribution of N^{2/3} c^{−1/2} (1 + √c)^{−4/3} [ λ_k − (1 + √c)^2 ] against the Tracy-Widom law F_2, for N = 500, n = 1500, c = 1/3, T = diag(1, ..., 1, 1.5) (0.5 < √c). Empirical distribution taken over 10,000 Monte Carlo simulations. [Two panels, for λ_2 and λ_1; axes "Centered-scaled largest eigenvalue" vs "Density".]


Tools for Random Matrix Theory Properties of the Asymptotic Support and Spiked Models

Second order statistics, ω_k > √c

Figure: Empirical and theoretical distribution of the fluctuations of û_1, i.e. histogram of √N ( |û_1^* u_1|^2 − ξ(ω_1) ) against its Gaussian limit, when X has i.i.d. CN(0, 1/n) entries, N/n = 1/8, N = 64, ω_1 = 1 (left) or ω_1 = 0.5 (right). Here ξ(ω_1) denotes the almost sure limit of |û_1^* u_1|^2 given previously.



Tools for Random Matrix Theory Summary of what we know and what is left to be done

Models studied with analytic tools

Stieltjes transform: models involving i.i.d. matrices
- sample covariance matrix models, X T X^H and T^{1/2} X^H X T^{1/2};
- doubly correlated models, R^{1/2} X T X^H R^{1/2} (with X Gaussian, the Kronecker model);
- doubly correlated models with external matrix, R^{1/2} X T X^H R^{1/2} + A;
- variance profile, X X^H, where X has independent entries with mean 0 and variance σ²_{i,j};
- Ricean channels, X X^H + A, where X has a variance profile;
- sums of doubly correlated i.i.d. matrices, ∑_{k=1}^K R_k^{1/2} X_k T_k X_k^H R_k^{1/2};
- information-plus-noise models, (X + A)(X + A)^H;
- frequency-selective doubly correlated channels, ( ∑_{k=1}^K R_k^{1/2} X_k T_k X_k^H R_k^{1/2} )( ∑_{k=1}^K R_k^{1/2} X_k T_k X_k^H R_k^{1/2} )^H;
- sums of frequency-selective doubly correlated channels, ∑_{k=1}^K R_k^{1/2} H_k T_k H_k^H R_k^{1/2}, where H_k = ∑_{l=1}^L R'_{kl}^{1/2} X_{kl} T'_{kl} X_{kl}^H R'_{kl}^{1/2}.

R- and S-transforms: models involving a column subset W of unitary matrices
- doubly correlated Haar matrices, R^{1/2} W T W^H R^{1/2};
- sums of simply correlated Haar matrices, ∑_{k=1}^K W_k T_k W_k^H.

In most cases, T and R can be taken random, but independent of X. More involved random matrices, such as Vandermonde matrices, have not yet been studied this way.
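For the first model in the list, the limiting Stieltjes transform solves a scalar fixed-point equation (the Silverstein equation): m(z) = ∫ dH(t) / ( t (1 − c − c z m(z)) − z ), with c = N/n and H the limit of F^T. A minimal numerical sketch (illustrative sizes, real Gaussian entries, T with three evenly weighted masses as in the earlier figure):

```python
import numpy as np

# Solve the Silverstein fixed point for B_N = T^(1/2) X X^H T^(1/2) at a
# point z outside the support, and compare against the empirical
# Stieltjes transform (1/N) tr (B_N - z I)^(-1).
rng = np.random.default_rng(5)
N, n = 300, 900
c = N / n
masses = np.array([1.0, 3.0, 7.0])      # H: three evenly weighted masses
t = np.repeat(masses, N // 3)           # population eigenvalues of T
X = rng.standard_normal((N, n)) / np.sqrt(n)
Y = np.sqrt(t)[:, None] * X
lam = np.linalg.eigvalsh(Y @ Y.T)

z = -1.0                                # real z < 0: outside the support
m = -1.0 / z                            # initial guess
for _ in range(500):                    # plain fixed-point iteration
    m = np.mean(1.0 / (masses * (1 - c - c * z * m) - z))

m_emp = np.mean(1.0 / (lam - z))        # empirical Stieltjes transform
print(m, m_emp)
```

For z < 0 the iteration is well behaved, and the deterministic solution matches the empirical transform up to O(1/N) terms.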


Tools for Random Matrix Theory Summary of what we know and what is left to be done

Models studied with moments/free probability

Asymptotic results:
- most of the above models with Gaussian X;
- products V_1 V_1^H T_1 V_2 V_2^H T_2 ... of Vandermonde and deterministic matrices;
- conjecture: any probability space of matrices invariant to row or column permutations.

Marginal studies, not yet fully explored:
- rectangular free convolution: singular values of rectangular matrices;
- finite size models: instead of the almost sure convergence of m_{X_N} as N → ∞, we can study the finite size behaviour of E[m_{X_N}].


Tools for Random Matrix Theory Summary of what we know and what is left to be done

Related bibliography

R. B. Dozier, J. W. Silverstein, "On the empirical distribution of eigenvalues of large dimensional information-plus-noise-type matrices," Journal of Multivariate Analysis, vol. 98, no. 4, pp. 678-694, 2007.

J. W. Silverstein, Z. D. Bai, "On the empirical distribution of eigenvalues of a class of large dimensional random matrices," Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.

J. W. Silverstein, S. Choi, "Analysis of the limiting spectral distribution of large dimensional random matrices," Journal of Multivariate Analysis, vol. 54, no. 2, pp. 295-309, 1995.

F. Benaych-Georges, "Rectangular random matrices, related free entropy and free Fisher's information," Arxiv preprint math/0512081, 2005.

Ø. Ryan, M. Debbah, "Multiplicative free convolution and information-plus-noise type matrices," Arxiv preprint math.PR/0702342, 2007.

V. L. Girko, "Theory of Random Determinants," Kluwer, Dordrecht, 1990.

R. Couillet, M. Debbah, J. W. Silverstein, "A deterministic equivalent for the capacity analysis of correlated multi-user MIMO channels," submitted to IEEE Trans. on Information Theory.

W. Hachem, Ph. Loubaton, J. Najim, "Deterministic equivalents for certain functionals of large random matrices," Annals of Applied Probability, vol. 17, no. 3, 2007.

M. J. M. Peacock, I. B. Collings, M. L. Honig, "Eigenvalue distributions of sums and products of large random matrices via incremental matrix expansions," IEEE Trans. on Information Theory, vol. 54, no. 5, p. 2123, 2008.

D. Petz, J. Réffy, "On asymptotics of large Haar distributed unitary matrices," Periodica Math. Hungar., vol. 49, pp. 103-117, 2004.

Ø. Ryan, A. Masucci, S. Yang, M. Debbah, "Finite dimensional statistical inference," submitted to IEEE Trans. on Information Theory, Dec. 2009.



Random Matrix Theory and Signal Source Sensing

Signal Sensing in Cognitive Radios



Random Matrix Theory and Signal Source Sensing Small Dimensional Analysis

Problem formulation

Assume the scenario of:
- a hypothetical signal source √P x ∈ C^n of power P;
- a transfer channel H ∈ C^{N×n};
- a sensor network of N sensors;
- additive noise σw ∈ C^N of covariance σ^2 I_N.

We consider the following hypothesis test:

y(m) = σ w(m)                  (H0)
y(m) = √P H x(m) + σ w(m)      (H1)

We wish to confront the hypotheses H0 and H1 given the data matrix Y ≜ [y(1), ..., y(M)] ∈ C^{N×M}.

We consider, in a Bayesian framework, the Neyman-Pearson test ratio

C(Y) ≜ P_{H1|Y,I}(Y) / P_{H0|Y,I}(Y)

with prior information I on H, x(m), σ, etc.
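To fix ideas, here is a toy generation of Y under both hypotheses, together with the simplest stand-in for the Neyman-Pearson ratio: the average received energy (1/(NM)) tr(Y Y^H), which concentrates around σ² under H0 and around σ² + P tr(HH^H)/N under H1. This is an energy detector, not the optimal test of the slides; all sizes, the value of P, and the random channel draw below are illustrative assumptions (real-valued for simplicity):

```python
import numpy as np

# Toy data for the sensing test: H0 is noise only, H1 adds a rank-n
# signal through an (assumed) random channel H. The per-entry energy
# separates the two hypotheses when the SNR is high enough.
rng = np.random.default_rng(6)
N, n, M = 16, 2, 200
P, sigma = 4.0, 1.0
H = rng.standard_normal((N, n)) / np.sqrt(N)    # illustrative channel draw

Y0 = sigma * rng.standard_normal((N, M))                           # H0
Xs = rng.standard_normal((n, M))                                   # x(1..M)
Y1 = np.sqrt(P) * H @ Xs + sigma * rng.standard_normal((N, M))     # H1

def energy(Y):
    return np.trace(Y @ Y.T) / (N * M)

print(energy(Y0), energy(Y1))
```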


Random Matrix Theory and Signal Source Sensing Small Dimensional Analysis

A Bayesian framework for cognitive radios

We assume prior statistical and deterministic knowledge I on H, σ, P. Using the maximum entropy principle (MaxEnt), a prior P_{(H,σ,P)}(H, σ, P) can be derived, and

P_{Y|Hi,I}(Y) = ∫_{(H,σ,P)} P_{Y|Hi,I,H,σ,P}(Y) P_{(H,σ,P)}(H, σ, P) d(H, σ, P).

In the following:
- we derive the case P = 1, σ known, where the knowledge about H conveys unitary invariance:
    E[tr HH^H] known: this is what we assume here;
    E[HH^H] = Q unknown but such that E[tr Q] is known;
    rank(HH^H) known;
- we compare alternative methods when P = 1 and σ are unknown.


Random Matrix Theory and Signal Source Sensing Small Dimensional Analysis

Evaluation of P_{Y|Hi,I}(Y)

By MaxEnt, X and W are standard Gaussian matrices with X_ij, W_ij ∼ CN(0, 1).

Under H0: Y = σW, and

P_{Y|H0,I}(Y) = 1/(πσ^2)^{NM} e^{−(1/σ^2) tr YY^H}.

Under H1:

Y = [ √P H   σ I_N ] [ X ; W ],

P_{Y|H1}(Y) = ∫_{Σ≥0} P_{Y|Σ,H1}(Y, Σ) P_Σ(Σ) dΣ,

with Σ = E[y(1) y(1)^H] = HH^H + σ^2 I_N. From the unitary invariance of H, denoting Σ = U G U^H with diag(G) = (g_1, ..., g_n, σ^2, ..., σ^2),

P_{Y|H1}(Y) = ∫_{U(N)×(σ^2,∞)^n} P_{Y|UGU^H,H1}(Y, U, G) P_U(U) P_{(g_1,...,g_n)}(g_1, ..., g_n) dU dg_1 ... dg_n,

where
- P_{Y|UGU^H,H1} is Gaussian with zero mean and covariance UGU^H;
- P_U is a constant (dU is the Haar measure);
- if H is Gaussian, P_{(g_1−σ^2,...,g_n−σ^2)} is the joint eigenvalue distribution of a central Wishart matrix.
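The H0 density is simply the product of NM per-entry complex Gaussian densities, which is a useful sanity check when implementing the log-likelihood. A minimal sketch (illustrative sizes):

```python
import numpy as np

# Check P_{Y|H0,I}(Y) = (pi sigma^2)^(-NM) exp(-tr(Y Y^H)/sigma^2):
# since tr(Y Y^H) = sum_ij |Y_ij|^2, the matrix density must equal the
# product of the entrywise CN(0, sigma^2) densities. Compare in log form.
rng = np.random.default_rng(7)
N, M, sigma = 4, 6, 1.3
W = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
Y = sigma * W                                   # a draw under H0

trYYH = np.trace(Y @ Y.conj().T).real
log_matrix = -N * M * np.log(np.pi * sigma**2) - trYYH / sigma**2
log_entries = np.sum(-np.log(np.pi * sigma**2) - np.abs(Y) ** 2 / sigma**2)
print(log_matrix, log_entries)
```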


Random Matrix Theory and Signal Source Sensing Small Dimensional Analysis

Result in the Gaussian case, n = 1

R. Couillet, M. Debbah, “A Bayesian Framework for Collaborative Multi-Source Signal Sensing”, IEEE Transactions on Signal Processing, vol. 58, no. 10, pp. 5186-5195, 2010.

Theorem (Neyman-Pearson test)

The ratio C(Y), when the receiver knows n = 1, P = 1, E[(1/N) tr HH^H] = 1 and σ², reads

C(Y) = (1/N) Σ_{l=1}^N [σ^{2(N+M−1)} e^{σ² + λ_l/σ²} / ∏_{i=1, i≠l}^N (λ_l − λ_i)] J_{N−M−1}(σ², λ_l)

with λ_1, . . . , λ_N the eigenvalues of YY^H and where

J_k(x, y) ≜ ∫_x^{+∞} t^k e^{−t − y/t} dt.

- non-trivial dependency on λ_1, . . . , λ_N: contrary to the energy detector, Σ_i λ_i is not a sufficient statistic;
- integration over σ² (or over P when P ≠ 1) is difficult.
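The function J_k has no elementary closed form, but since the integrand decays like e^(−t) it is easy to approximate numerically. Below is a small sketch (not from the slides; `jk` is our own helper name) using a truncated composite trapezoidal rule in plain Python:

```python
import math

def jk(k, x, y, upper=60.0, steps=20000):
    """Approximate J_k(x, y) = integral from x to infinity of t^k e^(-t - y/t) dt.

    The e^(-t) factor makes the tail negligible, so we truncate the
    integral at x + `upper` and apply the composite trapezoidal rule.
    A sketch good for a few digits of accuracy, not production code.
    """
    a, b = x, x + upper
    h = (b - a) / steps
    f = lambda t: t ** k * math.exp(-t - y / t)
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h
```

For y = 0 the integral reduces to an upper incomplete gamma function, e.g. J_0(1, 0) = e^(−1), which gives a quick sanity check of the quadrature.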


Random Matrix Theory and Signal Source Sensing Small Dimensional Analysis

Comparison to energy detector

[Plot: correct detection rate vs. false alarm rate, FAR from 10^−3 to 2·10^−2; curves: Energy detector, Neyman-Pearson test, Neyman-Pearson test (P unknown).]

Figure: ROC curve for single-source detection, K = 1, N = 4, M = 8, SNR = −3 dB, FAR range of practical interest, with signal power P = 0 dBm, either known or unknown at the receiver.


Random Matrix Theory and Signal Source Sensing Large Dimensional Random Matrix Analysis

Outline

1 Tools for Random Matrix Theory
  - Classical Random Matrix Theory
  - Introduction to Large Dimensional Random Matrix Theory
  - The Random Matrix Pioneers
  - The Moment Approach and Free Probability
  - Introduction of the Stieltjes Transform
  - Properties of the Asymptotic Support and Spiked Models
  - Summary of what we know and what is left to be done

2 Random Matrix Theory and Signal Source Sensing
  - Small Dimensional Analysis
  - Large Dimensional Random Matrix Analysis

3 Random Matrix Theory and Multi-Source Power Estimation
  - Optimal detector
  - The moment method
  - The Stieltjes transform method

4 Random Matrix Theory and Failure Detection in Complex Systems
  - Random matrix models of local failures in sensor networks
  - Failure detection and localization


Random Matrix Theory and Signal Source Sensing Large Dimensional Random Matrix Analysis

Reminder: the Marcenko-Pastur Law

If H_0, then the eigenvalues of (1/N) YY^H = σ² (1/N) WW^H asymptotically distribute according to the Marcenko-Pastur law, supported on [σ²(1−√c)², σ²(1+√c)²].

[Plot: density f_c(x) vs. x.]

Figure: Marcenko-Pastur law with c = lim N/L.


Random Matrix Theory and Signal Source Sensing Large Dimensional Random Matrix Analysis

Alternative Tests in Large Random Matrix Theory

Reminder:

Theorem

P(no eigenvalues outside [σ²(1−√c)², σ²(1+√c)²] for all large N) = 1

If H_0,

λ_max((1/N) YY^H) / λ_min((1/N) YY^H)  →a.s.  (1+√c)² / (1−√c)²

independent of the SNR!
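Since σ² scales both support edges, it cancels in the ratio; a minimal plain-Python sketch (helper names are ours) of the support edges and of the SNR-free limit:

```python
import math

def mp_support(sigma2, c):
    """Edges of the Marcenko-Pastur support of (1/N) Y Y^H under H0."""
    lo = sigma2 * (1 - math.sqrt(c)) ** 2
    hi = sigma2 * (1 + math.sqrt(c)) ** 2
    return lo, hi

def condition_number_limit(c):
    """Almost-sure limit of lambda_max/lambda_min under H0: the sigma^2
    factor cancels, so the limit depends on the ratio c only."""
    lo, hi = mp_support(1.0, c)
    return hi / lo
```

For instance, with c = 1/4 the support is [0.25 σ², 2.25 σ²] and the limiting ratio is 9, whatever the noise power.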


Random Matrix Theory and Signal Source Sensing Large Dimensional Random Matrix Analysis

Conditioning Number Test

L. S. Cardoso, M. Debbah, P. Bianchi, J. Najim, “Cooperative spectrum sensing using random matrix theory,” International Symposium on Wireless Pervasive Computing, Santorini, Greece, 2008.

Conditioning number test

C_cond(Y) = λ_max((1/N) YY^H) / λ_min((1/N) YY^H)

- if C_cond(Y) > τ, decide presence of a signal;
- if C_cond(Y) < τ, decide absence of signal.

But this is ad hoc! How well does it compare to the optimal test? Can we find non-ad-hoc approaches?
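As a concrete sketch of the decision rule (our own helper, taking the precomputed eigenvalues of (1/N) Y Y^H as input):

```python
def cond_test(eigenvalues, tau):
    """Conditioning number test: decide H1 (signal present) when the
    ratio lambda_max/lambda_min exceeds the threshold tau."""
    ratio = max(eigenvalues) / min(eigenvalues)
    return ratio > tau
```

With the threshold set slightly above the asymptotic H0 limit (1+√c)²/(1−√c)², eigenvalues pushed outside the Marcenko-Pastur support by a signal trigger a detection.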


Random Matrix Theory and Signal Source Sensing Large Dimensional Random Matrix Analysis

Generalized Likelihood Ratio Test

Bianchi, J. Najim, M. Maida, M. Debbah, “Performance of Some Eigen-based Hypothesis Tests for Collaborative Sensing,” Proceedings of the IEEE Statistical Signal Processing Workshop, 2009.

Generalized Likelihood Ratio Test

An alternative to the Neyman-Pearson test,

C_GLRT(Y) = sup_{H,σ²} P_{H_1|Y,H,σ²}(Y) / sup_{σ²} P_{H_0|Y,σ²}(Y)

- based on ratios of maximum likelihoods;
- clearly sub-optimal, but avoids the need for priors.

GLRT test

C_GLRT(Y) = (1 − 1/N)^{N−1} · [λ_max((1/N) YY^H) / ((1/N) Σ_{i=1}^N λ_i)] · (1 − λ_max((1/N) YY^H) / Σ_{i=1}^N λ_i)^{N−1−L}.

Contrary to the ad-hoc conditioning number test, the GLRT is based on the ratio

λ_max / ((1/N) tr(YY^H))
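The GLRT decision is monotone in the ratio of the largest eigenvalue to the average eigenvalue; a toy sketch of that core statistic (our own helper, operating on a precomputed eigenvalue list):

```python
def glrt_statistic(eigenvalues):
    """lambda_max of (1/N) Y Y^H normalized by the empirical mean of the
    eigenvalues, i.e. by (1/N) tr of the same matrix. Unlike the
    conditioning number, the denominator averages ALL eigenvalues."""
    return max(eigenvalues) / (sum(eigenvalues) / len(eigenvalues))
```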


Random Matrix Theory and Signal Source Sensing Large Dimensional Random Matrix Analysis

Neyman-Pearson Test against Asymptotic Tests

[Plot: correct detection rate vs. false alarm rate, FAR from 10^−3 to 2·10^−2; curves: Bayesian (Jeffreys prior), Bayesian (uniform prior), conditioning number, GLRT.]

Figure: ROC curve, for a priori unknown σ², of the Bayesian method, the conditioning number method and the GLRT method, M = 1, N = 4, L = 8, SNR = 0 dB. For the Bayesian method, both the uniform and the Jeffreys prior, with exponent α = 1, are provided.


Random Matrix Theory and Multi-Source Power Estimation

Application Context: Coverage range in Femtocells


Random Matrix Theory and Multi-Source Power Estimation

Problem Statement

We now consider the model

y^(m) = Σ_{k=1}^K √P_k H_k x_k^(m) + σ w^(m)

and wish to infer P_1, . . . , P_K.

With Y = [y^(1), . . . , y^(M)], this can be rewritten

Y = Σ_{k=1}^K √P_k H_k X_k + σW
  = [√P_1 H_1 · · · √P_K H_K] [X_1^T · · · X_K^T]^T + σW   (≜ H P^{1/2} and ≜ X, respectively)
  = [H P^{1/2}  σI_N] [X^T W^T]^T.

- If H and (X^T W^T)^T are unitarily invariant, Y is unitarily invariant.
- Most of the information about P_1, . . . , P_K is contained in the eigenvalues of B_N ≜ (1/M) YY^H.
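As a sanity check on the block rewriting above, here is a tiny deterministic example (toy numbers, plain-Python matrix helpers of our own) verifying that the summed form and the block-factorized form produce the same Y:

```python
import math

def matmul(A, B):
    """Naive matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(s, A):
    return [[s * x for x in row] for row in A]

# Toy setting: K = 2 sources, N = 2 antennas, n_k = 1, M = 2 samples.
P = [4.0, 9.0]                            # source powers P_1, P_2
H = [[[1.0], [0.5]], [[0.0], [1.0]]]      # channels H_k (N x 1)
X = [[[1.0, 2.0]], [[3.0, -1.0]]]         # signals X_k (1 x M)
sigma = 0.5
W = [[2.0, 0.0], [0.0, 2.0]]              # noise matrix (N x M)

# Summed form: Y = sum_k sqrt(P_k) H_k X_k + sigma W
Y_sum = scal(sigma, W)
for k in range(2):
    Y_sum = madd(Y_sum, scal(math.sqrt(P[k]), matmul(H[k], X[k])))

# Block form: Y = [H P^(1/2)  sigma I_N] [X ; W]
A = [[math.sqrt(P[0]) * H[0][i][0], math.sqrt(P[1]) * H[1][i][0]]
     + [sigma if j == i else 0.0 for j in range(2)] for i in range(2)]
S = [X[0][0], X[1][0]] + W                # stacked [X_1; X_2; W]
Y_blk = matmul(A, S)
```

Both computations yield the same 2×2 matrix, confirming the factorization used to argue the unitary invariance of Y.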


Random Matrix Theory and Multi-Source Power Estimation

From small to large system analysis

[Plot: empirical density of the eigenvalues of B_N = (1/M) YY^H.]

The classical approach requires evaluating P_{P_1,...,P_K|Y}:
- assuming Gaussian parameters, this parallels the previous calculus;
- it leads to a very involved expression;
- it is prohibitively expensive to evaluate even for small N, n_k, M.


Limiting spectrum of B_N

Assuming the dimensions N, n_k, M grow large, large dimensional random matrix theory provides a link between:
- the “observation”: the limiting spectral distribution (l.s.d.) of B_N;
- the “hidden parameters”: the powers P_1, . . . , P_K, i.e. the l.s.d. of P;
and yields consistent estimators of the hidden parameters.


Random Matrix Theory and Multi-Source Power Estimation Optimal detector

Optimal ML/MMSE estimators

R. Couillet and M. Guillaud, “Performance of Statistical Inference Methods for the Energy Estimation of Multiple Sources,” Invited Paper, IEEE International Communications Conference, Nice, France, 2011.

Conditional probability

Theorem

Assume P_1, . . . , P_K have multiplicity n_1 = . . . = n_K = 1. Then, denoting λ = (λ_1, . . . , λ_N) the eigenvalues of B_N,

P_{Y|P_1,...,P_K}(Y) = [C(−1)^{Nn+1} e^{Nσ² Σ_{i=1}^n 1/P_i} / (σ^{2(N−n)(M−n)} ∏_{i=1}^n P_i^{M−n+1} Δ(P))] Σ_{a∈S_n^N} (−1)^{|a|} sgn(a) e^{(M/σ²)|λ_[a]|}
  × [Δ(diag(λ_[a])) / Δ(diag(λ))] Σ_{b∈S_n} sgn(b) ∏_{i=1}^n J_{N−M−1}(Nσ²/P_{b_i}, NMλ_{a_i}/P_{b_i}).

ML/MMSE estimators

P̂^(ML) = arg max_{P_1,...,P_K} P_{Y|P_1,...,P_K}(Y)

P̂^(MMSE) = ∫_{[0,∞)^K} (P_1, . . . , P_K) P_{P_1,...,P_K|Y}(P_1, . . . , P_K) dP_1 . . . dP_K


Random Matrix Theory and Multi-Source Power Estimation The moment method

Reminder on free deconvolution

Free probability provides tools to compute

p_k = (1/K) Σ_{i=1}^K λ_i(P)^k = (1/K) Σ_{i=1}^K P_i^k

as a function of

b_k = (1/N) Σ_{i=1}^N λ_i((1/M) YY^H)^k.

One can obtain all the successive power sums of P_1, . . . , P_K; from these, we can infer the values of each P_k! The tools come from the following relations:

- cumulant-to-moment (and also moment-to-cumulant):

M_n = Σ_{π∈NC(n)} ∏_{V∈π} C_{|V|}

- sums of cumulants for asymptotically free A and B (of measure µ_A ⊞ µ_B):

C_k(A + B) = C_k(A) + C_k(B)

- products of cumulants for asymptotically free A and B (of measure µ_A ⊠ µ_B):

M_n(AB) = Σ_{(π_1,π_2)∈NC(n)} ∏_{V_1∈π_1, V_2∈π_2} C_{|V_1|}(A) C_{|V_2|}(B)

- moments of information-plus-noise models B_N = (1/n)(A_N + σW_N)(A_N + σW_N)^H:

µ_B = ((µ_A ⊠ µ_c) ⊞ δ_{σ²}) ⊠ µ_c

with µ_c the Marcenko-Pastur law with ratio c.
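A minimal numeric illustration (our own helper names) of the moment pipeline for K = 2: compute the observable moments b_k from an eigenvalue list, and, given the deconvolved power sums p_k, recover the individual powers as roots of a quadratic:

```python
import math

def empirical_moments(eigenvalues, order):
    """b_k = (1/N) sum_i lambda_i^k for k = 1..order: the observable
    side of the deconvolution (eigenvalues of (1/M) Y Y^H are assumed
    computed elsewhere)."""
    n = len(eigenvalues)
    return [sum(l ** k for l in eigenvalues) / n
            for k in range(1, order + 1)]

def recover_two_powers(p1, p2):
    """For K = 2, Newton's identities give the P_i as the roots of
    x^2 - 2 p1 x + (2 p1^2 - p2), i.e. p1 -/+ sqrt(p2 - p1^2)."""
    spread = math.sqrt(max(p2 - p1 ** 2, 0.0))
    return (p1 - spread, p1 + spread)
```

For example, power sums p_1 = 2 and p_2 = 5 (i.e. (P_1 + P_2)/2 = 2 and (P_1² + P_2²)/2 = 5) recover the pair {1, 3}.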


Free deconvolution approach

One can deconvolve $\mathbf{Y}\mathbf{Y}^{\mathsf{H}}$ in three steps:

- an information-plus-noise model with "deterministic matrix" $\mathbf{H}\mathbf{P}^{\frac12}\mathbf{X}\mathbf{X}^{\mathsf{H}}\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}$,
$$\mathbf{Y}\mathbf{Y}^{\mathsf{H}} = (\mathbf{H}\mathbf{P}^{\frac12}\mathbf{X} + \sigma\mathbf{W})(\mathbf{H}\mathbf{P}^{\frac12}\mathbf{X} + \sigma\mathbf{W})^{\mathsf{H}}$$

- from $\mathbf{H}\mathbf{P}^{\frac12}\mathbf{X}\mathbf{X}^{\mathsf{H}}\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}$, up to a Gram matrix commutation, we can deconvolve the signal $\mathbf{X}$,
$$\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}\mathbf{H}\mathbf{P}^{\frac12}\,\mathbf{X}\mathbf{X}^{\mathsf{H}}$$

- from $\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}\mathbf{H}\mathbf{P}^{\frac12}$, a new matrix commutation allows one to deconvolve $\mathbf{H}^{\mathsf{H}}\mathbf{H}$,
$$\mathbf{P}\mathbf{H}^{\mathsf{H}}\mathbf{H}.$$


Free deconvolution approach

In terms of distributions,

$$\mu^\infty_{\frac{1}{M}\mathbf{H}\mathbf{P}^{\frac12}\mathbf{X}\mathbf{X}^{\mathsf{H}}\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}} = \left(\left(\mu^\infty_{\mathbf{B}_N} \boxtimes \mu_{\frac{1}{c}}\right) \boxminus \delta_{\sigma^2}\right) \boxtimes \mu_{\frac{1}{c}}$$

$$\mu^\infty_{\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}\mathbf{H}\mathbf{P}^{\frac12}} = \mu^\infty_{\frac{1}{M}\mathbf{P}^{\frac12}\mathbf{H}^{\mathsf{H}}\mathbf{H}\mathbf{P}^{\frac12}\mathbf{X}\mathbf{X}^{\mathsf{H}}} \boxtimes \mu_{\frac{1}{cc_0}}$$

$$\mu^\infty_{\mathbf{P}} = \mu^\infty_{\mathbf{P}\mathbf{H}^{\mathsf{H}}\mathbf{H}} \boxtimes \mu_{\frac{1}{c_0}}.$$

Numerically, with $b_m \triangleq \frac{1}{N}E\!\left[\operatorname{tr} \mathbf{B}_N^m\right]$ and $p_m \triangleq \sum_{k=1}^{K} \frac{n_k}{n} P_k^m$,

$$b_1 = N^{-1}n\,p_1 + 1$$

$$b_2 = \left(N^{-2}M^{-1}n + N^{-1}n\right) p_2 + \left(N^{-2}n^2 + N^{-1}M^{-1}n^2\right) p_1^2 + \left(2N^{-1}n + 2M^{-1}n\right) p_1 + \left(1 + NM^{-1}\right)$$

$$\begin{aligned}
b_3 ={}& \left(3N^{-3}M^{-2}n + N^{-3}n + 6N^{-2}M^{-1}n + N^{-1}M^{-2}n + N^{-1}n\right) p_3 \\
&+ \left(6N^{-3}M^{-1}n^2 + 6N^{-2}M^{-2}n^2 + 3N^{-2}n^2 + 3N^{-1}M^{-1}n^2\right) p_2 p_1 \\
&+ \left(N^{-3}M^{-2}n^3 + N^{-3}n^3 + 3N^{-2}M^{-1}n^3 + N^{-1}M^{-2}n^3\right) p_1^3 \\
&+ \left(6N^{-2}M^{-1}n + 6N^{-1}M^{-2}n + 3N^{-1}n + 3M^{-1}n\right) p_2 \\
&+ \left(3N^{-2}M^{-2}n^2 + 3N^{-2}n^2 + 9N^{-1}M^{-1}n^2 + 3M^{-2}n^2\right) p_1^2 \\
&+ \left(3N^{-1}M^{-2}n + 3N^{-1}n + 9M^{-1}n + 3NM^{-2}n\right) p_1.
\end{aligned}$$
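The first-order relation $b_1 = N^{-1}n\,p_1 + 1$ (stated for $\sigma = 1$) can be sanity-checked by Monte Carlo. The sketch below is our own illustration with arbitrary dimensions; it assumes i.i.d. standard complex Gaussian $\mathbf{X}$ and $\mathbf{W}$, and a channel $\mathbf{H}$ with i.i.d. entries of variance $1/N$.

```python
import numpy as np

rng = np.random.default_rng(0)

N, n, M = 100, 8, 2000            # sensors, total transmit antennas, samples (our choice)
P = np.repeat([1.0, 4.0], 4)      # two sources with 4 antennas each, so n = 8
sigma = 1.0                       # the b_1 formula above assumes sigma = 1

def crandn(*shape):
    """Standard complex Gaussian entries (unit variance)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(N, n) / np.sqrt(N)                             # channel entries of variance 1/N
Y = H * np.sqrt(P) @ crandn(n, M) + sigma * crandn(N, M)  # Y = H P^(1/2) X + sigma W
B = Y @ Y.conj().T / M                                    # B_N = (1/M) Y Y^H

b1_emp = np.trace(B).real / N     # empirical (1/N) tr B_N
p1 = np.sum(P) / n                # p_1 = sum_k (n_k/n) P_k
b1_th = n * p1 / N + 1
print(b1_emp, b1_th)
```

At these dimensions the empirical first moment concentrates tightly around the formula.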


Newton-Girard inversion

Once the $p_m$ are obtained, in the particular case $n_1 = \ldots = n_K$, the Newton-Girard formulas give $P_1,\ldots,P_K$ as the solutions of

$$X^K - \Pi_1 X^{K-1} + \Pi_2 X^{K-2} - \ldots + (-1)^K \Pi_K = 0$$

with $\Pi_1,\ldots,\Pi_K$ recursively computed from

$$(-1)^K K\,\Pi_K + \sum_{i=1}^{K} (-1)^{K+i}\, p_i\, \Pi_{K-i} = 0.$$

This is a fast method, but it has major limitations:
- the polynomial solutions can be complex-valued;
- moment estimates propagate errors to higher-order moments (the second estimate can be $10^3$ times worse than the first!);
- modifying the Newton-Girard formulas boils down to ad-hoc methods;
- ML and MMSE alternatives are prohibitively expensive.
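The inversion step itself can be sketched as follows (our own illustration; here the power sums are taken exact and unnormalized, i.e. $p_m = \sum_k P_k^m$, so that the recursion is the classical Newton identity $k\,\Pi_k = \sum_{i=1}^{k}(-1)^{i-1} p_i \Pi_{k-i}$ and the $\Pi_k$ are the elementary symmetric polynomials):

```python
import numpy as np

def newton_girard_roots(p):
    """Recover {P_1, ..., P_K} from power sums p[m-1] = sum_k P_k^m, m = 1..K,
    via Newton's identities: k * Pi_k = sum_{i=1}^k (-1)^(i-1) p_i Pi_{k-i}."""
    K = len(p)
    Pi = [1.0]                                  # Pi_0 = 1
    for k in range(1, K + 1):
        s = sum((-1) ** (i - 1) * p[i - 1] * Pi[k - i] for i in range(1, k + 1))
        Pi.append(s / k)
    # X^K - Pi_1 X^(K-1) + ... + (-1)^K Pi_K = 0
    coeffs = [(-1) ** k * Pi[k] for k in range(K + 1)]
    return np.sort(np.roots(coeffs).real)

P_true = np.array([1 / 16, 1 / 4, 1.0])
p = [np.sum(P_true ** m) for m in (1, 2, 3)]
print(newton_girard_roots(p))   # recovers [0.0625, 0.25, 1.0] up to rounding
```

With exact power sums the recovery is perfect; the limitations listed above show up as soon as the $p_m$ are replaced by noisy moment estimates.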



Random Matrix Theory and Multi-Source Power Estimation · The Stieltjes transform method

Limiting spectrum of the sample covariance matrix

Recall the model
$$\mathbf{Y} = \begin{bmatrix}\mathbf{H}\mathbf{P}^{\frac12} & \sigma\mathbf{I}_N\end{bmatrix}\begin{bmatrix}\mathbf{X}\\ \mathbf{W}\end{bmatrix},$$
very similar to a sample covariance matrix.

For simplicity of analysis, consider the sample covariance matrix model
$$\mathbf{Y} \triangleq \mathbf{T}^{\frac12}\mathbf{X} \in \mathbb{C}^{N\times n},\qquad \mathbf{B}_N = \frac{1}{n}\mathbf{Y}\mathbf{Y}^{\mathsf{H}} \in \mathbb{C}^{N\times N},\qquad \underline{\mathbf{B}}_N = \frac{1}{n}\mathbf{Y}^{\mathsf{H}}\mathbf{Y} \in \mathbb{C}^{n\times n}$$
where $\mathbf{T} \in \mathbb{C}^{N\times N}$ has eigenvalues $t_1,\ldots,t_K$, $t_k$ with multiplicity $N_k$, and $\mathbf{X} \in \mathbb{C}^{N\times n}$ has i.i.d. zero mean, variance $1$ entries.

If $F^{\mathbf{T}} \Rightarrow T$, then $m_{F^{\mathbf{B}_N}}(z) = m_{\mathbf{B}_N}(z) \xrightarrow{\text{a.s.}} m_F(z)$ such that

$$m_{\underline{F}}(z) = \left(c\int \frac{t}{1 + t\,m_{\underline{F}}(z)}\,dT(t) - z\right)^{-1} \;\Leftrightarrow\; m_T\!\left(-1/m_{\underline{F}}(z)\right) = -z\, m_F(z)\, m_{\underline{F}}(z)$$

with $m_{\underline{F}}(z) = c\,m_F(z) + (c-1)\frac{1}{z}$ and $N/n \to c$.
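The fixed-point characterization of $m_{\underline{F}}$ is directly usable numerically. The sketch below (our own illustration) iterates it for the white case $\mathbf{T} = \mathbf{I}_N$, i.e. $T = \delta_1$, and checks the result against the closed-form Marčenko–Pastur companion transform, which for $T = \delta_1$ is the root in $\mathbb{C}^+$ of $z\,\underline{m}^2 + (z + 1 - c)\,\underline{m} + 1 = 0$.

```python
import numpy as np

def m_fixed_point(z, c, t_vals, t_weights, iters=500):
    """Iterate m <- 1 / (c * int t/(1+tm) dT(t) - z) for z in C^+."""
    m = -1.0 / z
    for _ in range(iters):
        m = 1.0 / (c * np.sum(t_weights * t_vals / (1.0 + t_vals * m)) - z)
    return m

def m_mp_closed_form(z, c):
    """Root in C^+ of z m^2 + (z + 1 - c) m + 1 = 0 (T = delta_1 case)."""
    disc = np.sqrt((z + 1 - c) ** 2 - 4 * z + 0j)
    roots = ((-(z + 1 - c) + s * disc) / (2 * z) for s in (1, -1))
    return max(roots, key=lambda r: r.imag)

z, c = 1.0 + 2.0j, 0.5
m_fp = m_fixed_point(z, c, np.array([1.0]), np.array([1.0]))
m_cf = m_mp_closed_form(z, c)
print(m_fp, m_cf)   # the two computations agree
```

For $z$ well inside $\mathbb{C}^+$ the plain iteration converges quickly; closer to the real axis more iterations (or damping) are needed.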


Complex integration

From the Cauchy integral formula, with $C_k$ a contour enclosing only $t_k$ (negatively oriented),

$$t_k = \frac{1}{2\pi i}\oint_{C_k} \frac{\omega}{t_k - \omega}\,d\omega = \frac{1}{2\pi i}\oint_{C_k} \frac{1}{N_k}\sum_{j=1}^{K} \frac{N_j\,\omega}{t_j - \omega}\,d\omega = \frac{N}{2\pi i\,N_k}\oint_{C_k} \omega\, m_T(\omega)\,d\omega.$$

After the variable change $\omega = -1/m_{\underline{F}}(z)$,

$$t_k = \frac{N}{N_k}\,\frac{1}{2\pi i}\oint_{C_{F,k}} z\,\frac{m_F(z)\, m'_{\underline{F}}(z)}{m_{\underline{F}}^2(z)}\,dz.$$

When the system dimensions are large,

$$m_F(z) \simeq m_{\mathbf{B}_N}(z) \triangleq \frac{1}{N}\sum_{k=1}^{N} \frac{1}{\lambda_k - z},\quad\text{with } (\lambda_1,\ldots,\lambda_N) = \operatorname{eig}(\mathbf{B}_N) = \operatorname{eig}\!\left(\tfrac{1}{n}\mathbf{Y}\mathbf{Y}^{\mathsf{H}}\right),$$

and $m_{\underline{\mathbf{B}}_N}(z) = \frac{N}{n} m_{\mathbf{B}_N}(z) - \left(1 - \frac{N}{n}\right)\frac{1}{z}$ the Stieltjes transform of $\underline{\mathbf{B}}_N = \frac{1}{n}\mathbf{Y}^{\mathsf{H}}\mathbf{Y}$. Dominated convergence arguments then show

$$\hat t_k - t_k \xrightarrow{\text{a.s.}} 0,\quad\text{with } \hat t_k = \frac{N}{N_k}\,\frac{1}{2\pi i}\oint_{C_{F,k}} z\,\frac{m_{\mathbf{B}_N}(z)\, m'_{\underline{\mathbf{B}}_N}(z)}{m_{\underline{\mathbf{B}}_N}^2(z)}\,dz.$$


Where does the contour go?

Intuition:
- $m_F(z)$ is defined outside the support of $F$;
- on the real axis, $m'_F(z) = \int \frac{1}{(t-z)^2}\,dF(t) > 0$;
- it therefore has a locally increasing inverse outside the support of $F$;
- notice that $m_{\underline{F}}(z)$ has a closed-form inverse
$$z_F(m) = -\frac{1}{m} + c\int \frac{t}{1 + tm}\,dT(t).$$

It can be shown that $z_F(m)$, $m < 0$, is increasing if and only if its image is outside the support of $F$.
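This reading of the support is easy to reproduce numerically. The sketch below (our own illustration) evaluates $z'_F(m)$ for the three evenly weighted masses $t \in \{1, 3, 7\}$ used in the figure that follows, with an assumed ratio $c = 0.1$ (small enough for the clusters to separate); the sign changes of $z'_F$ on $m < 0$ locate the support edges.

```python
import numpy as np

t_vals = np.array([1.0, 3.0, 7.0])     # the three evenly weighted masses of T
w = np.full(3, 1.0 / 3.0)
c = 0.1                                 # assumed ratio N/n (our choice)

def zF(m):
    """Closed-form inverse z_F(m) = -1/m + c * int t/(1+tm) dT(t)."""
    return -1.0 / m + c * np.sum(w * t_vals / (1.0 + t_vals * m))

# z'_F(m) on a grid of m < 0, keeping clear of the poles at m = 0, -1, -1/3, -1/7
m = np.linspace(-5.0, -1e-3, 200001)
m = m[np.min(np.abs(m[:, None] + 1.0 / t_vals[None, :]), axis=1) > 2e-3]
dz = 1.0 / m**2 - c * ((w * t_vals**2) / (1.0 + np.outer(m, t_vals))**2).sum(axis=1)

signs = np.sign(dz)
crossings = m[1:][signs[1:] != signs[:-1]]
edges = sorted(zF(mm) for mm in crossings)
print(len(edges))   # 6 sign changes of z'_F: three clusters, hence six support edges
```

Each extremum of $z_F$ maps, on the vertical axis, to one edge of the limiting support, exactly as in the figure below.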


Inverse formula for the Stieltjes transform

[Figure: $z_F(m)$ for $m \in B$, with $F$ the l.s.d. of $\mathbf{B}_N = \mathbf{X}_N^{\mathsf{H}}\mathbf{T}_N\mathbf{X}_N$, $\mathbf{T}_N$ diagonal composed of three evenly weighted masses at $1$, $3$ and $7$; vertical asymptotes at $m = -1, -\frac{1}{3}, -\frac{1}{7}$. The support of $F$ is read on the vertical axis, wherever $z_F(m)$ is not increasing.]

Playing with the asymptotes...

- Denote $x_k^-$, $x_k^+$ two points on either side of cluster $k$ in $F$ such that $x_k^- = z_F(m_k^-)$ and $x_k^+ = z_F(m_k^+)$.
- From the asymptotes, we observe that
$$t_{k-1} < -\frac{1}{m_k^-} < t_k < -\frac{1}{m_k^+} < t_{k+1}.$$
- We can therefore take a contour $C_{F,k}$ that crosses the real line at $-\frac{1}{m_k^-}$ and at $-\frac{1}{m_k^+}$ and lies outside the real line everywhere else.


Termination

X. Mestre, "Improved estimation of eigenvalues and eigenvectors of covariance matrices using their sample estimates," IEEE Trans. on Information Theory, vol. 54, no. 11, pp. 5113-5129, 2008.

It remains to compute the integral by residue calculus:

$$\hat t_k = \frac{N}{N_k}\,\frac{1}{2\pi i}\oint_{C_{F,k}} z\,\frac{m_{\mathbf{B}_N}(z)\, m'_{\underline{\mathbf{B}}_N}(z)}{m_{\underline{\mathbf{B}}_N}^2(z)}\,dz.$$

From exact separation (Bai and Silverstein, 1998), $C_{F,k}$ encloses exactly the "expected" eigenvalues, almost surely for all large $N$.

The integral gives the estimator

$$\hat t_k = \frac{n}{N_k}\sum_{m\in\mathcal{N}_k} (\lambda_m - \mu_m)$$

with $\mathcal{N}_k$ the indexes of cluster $k$ and $\mu_1 \leq \ldots \leq \mu_N$ the ordered eigenvalues of the matrix $\operatorname{diag}(\boldsymbol{\lambda}) - \frac{1}{n}\sqrt{\boldsymbol{\lambda}}\sqrt{\boldsymbol{\lambda}}^{\mathsf{T}}$, $\boldsymbol{\lambda} = (\lambda_1,\ldots,\lambda_N)^{\mathsf{T}}$.
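The closed-form estimator lends itself to a direct numerical check. The sketch below is our own illustration, with arbitrary dimensions chosen so that the clusters separate (masses $1, 3, 7$, ratio $N/n = 0.1$): it draws $\mathbf{B}_N = \frac{1}{n}\mathbf{T}^{1/2}\mathbf{X}\mathbf{X}^{\mathsf{H}}\mathbf{T}^{1/2}$ and compares $\hat t_k$ with the true $t_k$.

```python
import numpy as np

rng = np.random.default_rng(1)

t_true = [1.0, 3.0, 7.0]              # population eigenvalues t_k
Nk = 20                               # multiplicity of each t_k (our choice)
N, n = 3 * Nk, 600                    # c = N/n = 0.1: clusters separate
T = np.repeat(t_true, Nk)

X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
Y = np.sqrt(T)[:, None] * X           # Y = T^(1/2) X
B = Y @ Y.conj().T / n                # B_N = (1/n) Y Y^H

lam = np.linalg.eigvalsh(B)           # ordered sample eigenvalues, ascending
s = np.sqrt(lam)
# mu: ordered eigenvalues of diag(lambda) - (1/n) sqrt(lambda) sqrt(lambda)^T
mu = np.linalg.eigvalsh(np.diag(lam) - np.outer(s, s) / n)

# with exact separation, cluster k occupies Nk consecutive ordered eigenvalues
t_hat = [n / Nk * np.sum(lam[k*Nk:(k+1)*Nk] - mu[k*Nk:(k+1)*Nk]) for k in range(3)]
print(t_hat)   # close to [1, 3, 7]
```

Note that no contour ever has to be constructed explicitly: the residue calculus collapses into the eigenvalues of a rank-one perturbation of $\operatorname{diag}(\boldsymbol{\lambda})$.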


Application to the current model

R. Couillet, J. W. Silverstein, Z. Bai, M. Debbah, "Eigen-Inference for Energy Estimation of Multiple Sources," IEEE Trans. on Inf. Theory, vol. 57, no. 4, pp. 2420-2439, 2011.

Extending $\mathbf{Y}$ with zeros, our model is a "double sample covariance matrix"

$$\underbrace{\mathbf{Y}}_{(N+n)\times M} = \underbrace{\begin{bmatrix}\mathbf{H}\mathbf{P}^{\frac12} & \sigma\mathbf{I}_N\\ 0 & 0\end{bmatrix}}_{(N+n)\times(N+n)}\ \underbrace{\begin{bmatrix}\mathbf{X}\\ \mathbf{W}\end{bmatrix}}_{(N+n)\times M}.$$

Limiting distribution of $\frac{1}{M}\mathbf{Y}\mathbf{Y}^{\mathsf{H}}$

Theorem (l.s.d. of $\mathbf{B}_N$). Let $\mathbf{B}_N = \frac{1}{M}\mathbf{Y}\mathbf{Y}^{\mathsf{H}}$ with eigenvalues $\lambda_1,\ldots,\lambda_N$. Denote $m_{\underline{\mathbf{B}}_N}(z) \triangleq \frac{1}{M}\sum_{k=1}^{M} \frac{1}{\lambda_k - z}$, with $\lambda_i = 0$ for $i > N$. Then, for $M/N \to c$, $N/n_k \to c_k$, $N/n \to c_0$, for any $z \in \mathbb{C}^+$,

$$m_{\underline{\mathbf{B}}_N}(z) \xrightarrow{\text{a.s.}} m_{\underline{F}}(z)$$

with $m_{\underline{F}}(z)$ the unique solution in $\mathbb{C}^+$ of

$$\frac{1}{m_{\underline{F}}(z)} = -\sigma^2 + \frac{1}{f(z)}\left[\frac{c_0 - 1}{c_0} + m_P\!\left(-\frac{1}{f(z)}\right)\right],\qquad\text{with } f(z) = (c-1)\,m_{\underline{F}}(z) - c\,z\,m_{\underline{F}}(z)^2.$$

Application to the current model (2)

Estimator calculus

Theorem (Estimator of $P_1,\ldots,P_K$). Let $\mathbf{B}_N \in \mathbb{C}^{N\times N}$ be defined as above and $\boldsymbol{\lambda} = (\lambda_1,\ldots,\lambda_N)$, $\lambda_1 < \ldots < \lambda_N$. Assume that the asymptotic cluster separability condition is fulfilled for some $k$. Then, as $N, n, M \to \infty$,

$$\hat P_k - P_k \xrightarrow{\text{a.s.}} 0,$$

where

$$\hat P_k = \frac{NM}{n_k(M-N)}\sum_{i\in\mathcal{N}_k} (\eta_i - \mu_i)$$

with $\mathcal{N}_k$ the set indexing the eigenvalues in cluster $k$ of $F$, $\eta_1 < \ldots < \eta_N$ the eigenvalues of $\operatorname{diag}(\boldsymbol{\lambda}) - \frac{1}{N}\sqrt{\boldsymbol{\lambda}}\sqrt{\boldsymbol{\lambda}}^{\mathsf{T}}$ and $\mu_1 < \ldots < \mu_N$ the eigenvalues of $\operatorname{diag}(\boldsymbol{\lambda}) - \frac{1}{M}\sqrt{\boldsymbol{\lambda}}\sqrt{\boldsymbol{\lambda}}^{\mathsf{T}}$.

Remarks

- The solution is computationally simple and explicit, and the final formula is compact.
- The cluster separability condition is fundamental. It requires:
    for all other parameters fixed, the $P_k$ cannot be too close to one another: source separation problem;
    for all other parameters fixed, $\sigma^2$ must be kept low: low-SNR undecidability problem;
    for all other parameters fixed, $M/N$ cannot be too low: sample deficiency issue (not such an issue though);
    for all other parameters fixed, $N/n$ cannot be too low: diversity issue.
- Exact spectrum separability is an essential ingredient (known for very few models to this day).

[Figure: histogram of the eigenvalues of $\mathbf{B}_N = \frac{1}{M}\mathbf{Y}\mathbf{Y}^{\mathsf{H}}$ against the limiting spectrum of $\mathbf{B}_N$ (density versus eigenvalues).]


Stieltjes transform method vs. optimum

[Figure: distribution function of the estimates for the detection of two power sources, $P_1 = 1$, $P_2 = 4$, $n_1 = n_2 = 1$, $M = N = 16$; optimum (ML and MMSE) against the Stieltjes transform method.]

MSE          P1       P2
Opt. MMSE    0.1239   0.1278
Stieltjes    0.1514   0.1332

Stieltjes transform method vs. conventional method

[Figure: histogram of the cluster-mean approach and of $\hat P_k$ for $k \in \{1, 2, 3\}$, $P_1 = 1/16$, $P_2 = 1/4$, $P_3 = 1$, $n_1 = n_2 = n_3 = 4$ antennas per user, $N = 24$ sensors, $M = 128$ samples and SNR $= 20$ dB.]

Performance comparison

[Figure: normalized mean square error (dB) of the largest estimated power $\hat P_3$ as a function of the SNR (dB), for $P_1 = 1/16$, $P_2 = 1/4$, $P_3 = 1$, $n_1 = n_2 = n_3 = 4$, $N = 24$, $M = 128$; comparison between the classical cluster-average, moment and Stieltjes transform approaches.]

Page 170: Random Matrix Theory for Signal Processing Applications · Random Matrix Theory for Signal Processing Applications Romain Couillet1, Merouane Debbah´ 2 1EDF Chair on System Sciences

Random Matrix Theory and Multi-Source Power Estimation The Stieltjes transform method

Related bibliography

N. El Karoui, “Spectrum estimation for large dimensional covariance matrices using random matrix theory,” Annals of Statistics, vol. 36, no. 6, pp. 2757-2790, 2008.

N. R. Rao, J. A. Mingo, R. Speicher, A. Edelman, “Statistical eigen-inference from large Wishart matrices,” Annals of Statistics, vol. 36, no. 6, pp. 2850-2885, 2008.

R. Couillet, M. Debbah, “Free deconvolution for OFDM multicell SNR detection,” PIMRC 2008, Cannes, France.

X. Mestre, “Improved estimation of eigenvalues and eigenvectors of covariance matrices using their sample estimates,” IEEE Trans. on Information Theory, vol. 54, no. 11, pp. 5113-5129, 2008.

R. Couillet, J. W. Silverstein, M. Debbah, “Eigen-inference for multi-source power estimation,” submitted to ISIT 2010.

Z. D. Bai, J. W. Silverstein, “No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices,” Annals of Probability, vol. 26, no. 1, pp. 316-345, 1998.

Z. D. Bai, J. W. Silverstein, “CLT of linear spectral statistics of large dimensional sample covariance matrices,” Annals of Probability, vol. 32, no. 1A, pp. 553-605, 2004.

J. Silverstein, Z. Bai, “Exact separation of eigenvalues of large dimensional sample covariance matrices,” Annals of Probability, vol. 27, no. 3, pp. 1536-1555, 1999.

Ø. Ryan, M. Debbah, “Free Deconvolution for Signal Processing Applications,” IEEE International Symposium on Information Theory, pp. 1846-1850, 2007.


Random Matrix Theory and Failure Detection in Complex Systems

Outline

1 Tools for Random Matrix Theory
   Classical Random Matrix Theory
   Introduction to Large Dimensional Random Matrix Theory
   The Random Matrix Pioneers
   The Moment Approach and Free Probability
   Introduction of the Stieltjes Transform
   Properties of the Asymptotic Support and Spiked Models
   Summary of what we know and what is left to be done

2 Random Matrix Theory and Signal Source Sensing
   Small Dimensional Analysis
   Large Dimensional Random Matrix Analysis

3 Random Matrix Theory and Multi-Source Power Estimation
   Optimal detector
   The moment method
   The Stieltjes transform method

4 Random Matrix Theory and Failure Detection in Complex Systems
   Random matrix models of local failures in sensor networks
   Failure detection and localization



Random Matrix Theory and Failure Detection in Complex Systems Random matrix models of local failures in sensor networks

Failure detection

[Figure: diagram of an 8-node sensor network annotated with values between 0.79 and 0.94; a failed node is marked ×.]


Random Matrix Theory and Failure Detection in Complex Systems Random matrix models of local failures in sensor networks

Node failure detection in sensor networks

Consider the model
    y = H θ + σ w
with H ∈ C^{N×p} deterministic, θ ∼ CN(0, I_p), w ∼ CN(0, I_N).

In particular E[y] = 0 and E[y y^H] = R ≜ H H^H + σ² I_N.

With s = R^{-1/2} y,
    E[s s^H] = I_N.

Upon failure of sensor k, y becomes
    y′ = (I_N − e_k e_k^H) H θ + σ_k e_k e_k^H θ′ + σ w
for some noise variance σ_k².

Now E[y′] = 0 and
    E[y′ y′^H] = (I_N − e_k e_k^H) H H^H (I_N − e_k e_k^H) + σ_k² e_k e_k^H + σ² I_N.

With now s = R^{-1/2} y′,
    E[s s^H] = I_N + P_k
with
    P_k = −R^{-1/2} H H^H e_k e_k^H R^{-1/2} + R^{-1/2} e_k [ (e_k^H H H^H e_k + σ_k²) e_k^H R^{-1/2} − e_k^H H H^H R^{-1/2} ]
of rank 2 (the image of P_k lies in Span(R^{-1/2} e_k, R^{-1/2} H H^H e_k)).
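This rank-2 structure is easy to check numerically. Below is a small sketch (the randomly drawn H, the dimensions, and the noise levels are illustrative choices, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, k = 8, 4, 2
sigma, sigma_k = 1.0, 0.5

# Illustrative deterministic channel H and covariance R = H H^H + sigma^2 I
H = (rng.standard_normal((N, p)) + 1j * rng.standard_normal((N, p))) / np.sqrt(2)
R = H @ H.conj().T + sigma**2 * np.eye(N)

# R^{-1/2} via eigendecomposition (R is Hermitian positive definite)
w, U = np.linalg.eigh(R)
R_isqrt = U @ np.diag(1 / np.sqrt(w)) @ U.conj().T

e = np.zeros((N, 1)); e[k] = 1.0
HHt = H @ H.conj().T

# P_k as derived on the slide
Pk = (-R_isqrt @ HHt @ e @ e.conj().T @ R_isqrt
      + R_isqrt @ e @ ((e.conj().T @ HHt @ e + sigma_k**2) * e.conj().T @ R_isqrt
                       - e.conj().T @ HHt @ R_isqrt))

# Direct computation: E[s s^H] - I_N with s = R^{-1/2} y'
Ry = ((np.eye(N) - e @ e.conj().T) @ HHt @ (np.eye(N) - e @ e.conj().T)
      + sigma_k**2 * e @ e.conj().T + sigma**2 * np.eye(N))
Pk_direct = R_isqrt @ Ry @ R_isqrt - np.eye(N)

assert np.allclose(Pk, Pk_direct)
assert np.linalg.matrix_rank(Pk, tol=1e-10) == 2
```

The two expressions agree term by term because E[y′ y′^H] − R collects exactly the three rank-1 corrections at index k.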


Random Matrix Theory and Failure Detection in Complex Systems Random matrix models of local failures in sensor networks

Sudden parameter change detection in sensor networks

Upon a sudden change of parameter θ_k,
    y′ = H (I_p + α_k e_k e_k^H) θ + μ_k H e_k + σ w.

Then
    E[y′ y′^H] = H (I_p + [μ_k² + (1 + α_k)² − 1] e_k e_k^H) H^H + σ² I_N.

With R = H H^H + σ² I_N and s = R^{-1/2} y′,
    E[s s^H] = I_N + P_k
with
    P_k = [μ_k² + (1 + α_k)² − 1] R^{-1/2} H e_k e_k^H H^H R^{-1/2}.
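The same kind of sanity check applies here, where P_k should now be exactly rank 1 (again a sketch with illustrative values for H, α_k, and μ_k):

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, k = 8, 4, 1
sigma, alpha_k, mu_k = 1.0, 0.3, 0.7

H = (rng.standard_normal((N, p)) + 1j * rng.standard_normal((N, p))) / np.sqrt(2)
R = H @ H.conj().T + sigma**2 * np.eye(N)
w, U = np.linalg.eigh(R)
R_isqrt = U @ np.diag(1 / np.sqrt(w)) @ U.conj().T

e = np.zeros((p, 1)); e[k] = 1.0
gamma = mu_k**2 + (1 + alpha_k)**2 - 1

# P_k as on the slide: a single scaled outer product
Pk = gamma * R_isqrt @ H @ e @ e.conj().T @ H.conj().T @ R_isqrt

# Direct computation from E[y' y'^H]
Ry = H @ (np.eye(p) + gamma * e @ e.conj().T) @ H.conj().T + sigma**2 * np.eye(N)
Pk_direct = R_isqrt @ Ry @ R_isqrt - np.eye(N)

assert np.allclose(Pk, Pk_direct)
assert np.linalg.matrix_rank(Pk, tol=1e-10) == 1
```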



Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Classical approach

With K the number of failure scenarios, hypothesis test between:
   no failure
   failure of type 1
   . . .
   failure of type K

The maximum-likelihood approach is computationally demanding:

   computational cost ≃ O(N³ K)

which is

   computational cost ≃ O(N^{3+m})

for the detection of m simultaneous node failures (K then grows as N^m).

Ad-hoc approaches (e.g., PCA) can reduce this cost.

We propose here a “maximum-likelihood-type” method requiring

   one SVD + O(K)


Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Failure detection and identification

R. Couillet and W. Hachem, “Local failure detection and identification in large sensor networks,” submitted to IEEE Transactions on Information Theory, 2011.

Upon reception of S = [s_1, . . . , s_n]:

Failure detection based on the hypothesis test
   H0: no failure
   H1: failure

If H1 is decided, multi-hypothesis test
   Hk = “failure of type k”

Detection test on the largest eigenvalue λ1 of (1/n) S S^H: for a false alarm rate η, decide H1 if

   λ′1 > T2^{-1}(1 − η)

with

   λ′1 = N^{2/3} [λ1 − (1 + √c_N)²] / [(1 + √c_N)^{4/3} c_N^{1/2}]

and T2 the complex Tracy-Widom distribution.
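NumPy/SciPy ship no Tracy-Widom quantile function, but the threshold T2^{-1}(1 − η) can be illustrated with a Monte Carlo stand-in: estimate the null distribution of λ′1 empirically and use its (1 − η)-quantile. A sketch (dimensions and trial counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 60, 180
cN = N / n
eta = 0.05  # target false alarm rate

def lambda1_prime(trials):
    """Centered/scaled largest eigenvalue of (1/n) S S^H under H0 (pure noise)."""
    out = np.empty(trials)
    for t in range(trials):
        S = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
        lam1 = np.linalg.eigvalsh(S @ S.conj().T / n)[-1]
        out[t] = (N**(2/3) * (lam1 - (1 + np.sqrt(cN))**2)
                  / ((1 + np.sqrt(cN))**(4/3) * np.sqrt(cN)))
    return out

# Monte Carlo stand-in for T2^{-1}(1 - eta)
threshold = np.quantile(lambda1_prime(400), 1 - eta)

# On fresh H0 data, the false alarm rate should be close to eta
far = np.mean(lambda1_prime(400) > threshold)
assert abs(far - eta) < 0.05
```

In practice the Tracy-Widom quantile would be tabulated once; the Monte Carlo here only serves to check the centering and scaling of λ′1.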


Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Failure localization

For localization, eigenvalues are poor statistics.

Denote, in case of failure of type k,
   E[s s^H] = I_N + ω_k u_{k,1} u_{k,1}^H
(rank-1 perturbation for simplicity).

We use the eigenvector u1 corresponding to λ1, and
   |u1^H u_{k,1}|² → ξ(ω_k) > 0  almost surely
for k the failure index.

With the CLT on |u1^H u_{k,1}|² − ξ(ω_k), we have the estimator

   k⋆ = argmax_{1≤k≤K} f(√N (|u1^H u_{k,1}|² − ξ(ω_k)); σ_k²)

with f the Gaussian density.

The test can be reinforced by including:
   projection statistics on other eigenvectors
   statistics of the eigenvalues
   the joint probability over multiple spikes.

Further generalizations are possible assuming unknown failure amplitude.
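A minimal localization sketch under the rank-1 model. The assumptions here are ours, not the slides': ξ(ω) is taken as the standard spiked-model limit (1 − c/ω²)/(1 + c/ω), the CLT variances σ_k² are placeholders, and real Gaussian data is used as a proxy for the complex model:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, K = 48, 480, 3
c = N / n

# Hypothetical orthonormal failure signatures u_{k,1} and spike amplitudes
U_fail = np.linalg.qr(rng.standard_normal((N, K)))[0]
omega = np.array([2.0, 3.0, 4.0])

# Assumed spiked-model limit of |u1^H u_{k,1}|^2 under failure k
xi = (1 - c / omega**2) / (1 + c / omega)
sigma2 = np.full(K, 1.0)  # placeholder CLT variances

# Simulate failure of type k_true: covariance I_N + omega_k u u^T
k_true = 1
u = U_fail[:, k_true]
C = np.eye(N) + omega[k_true] * np.outer(u, u)
S = np.linalg.cholesky(C) @ rng.standard_normal((N, n))

# Top eigenvector of the sample covariance
w, V = np.linalg.eigh(S @ S.T / n)
u1 = V[:, -1]

# Projection statistics and the Gaussian-density argmax (log-density, equal sigma_k)
proj = np.abs(U_fail.T @ u1)**2
stat = np.sqrt(N) * (proj - xi)
k_star = int(np.argmax(-stat**2 / sigma2))
```

For the failing index the statistic fluctuates around 0, while the others sit near −√N ξ(ω_k), so the argmax reliably recovers k_true at these dimensions.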


Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Performance results

[Figure: correct detection/localization rates vs. n, with CDR and CLR curves for FAR = 10⁻⁴, 10⁻³, 10⁻².]

Figure: Correct detection (CDR) and localization (CLR) rates for different false alarm rates (FAR) and different n; worst-case node failure in a 100-node network.


Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Selected authors’ recent bibliography

Articles in Journals
R. Couillet, W. Hachem, “Local failure detection and identification in large sensor networks,” IEEE Transactions on Information Theory, submitted.
R. Couillet, J. Hoydis, M. Debbah, “Random Unitary Beamforming over Correlated Fading Channels,” IEEE Transactions on Information Theory, submitted.
R. Couillet, J. Hoydis, M. Debbah, “A deterministic equivalent approach to the performance analysis of isometric random precoded systems,” IEEE Transactions on Information Theory, submitted.
R. Couillet, J. W. Silverstein, Z. Bai, M. Debbah, “Eigen-Inference for Energy Estimation of Multiple Sources,” IEEE Trans. on Information Theory, 2010, to be published.
R. Couillet, J. W. Silverstein, M. Debbah, “A Deterministic Equivalent for the Capacity Analysis of Correlated Multi-User MIMO Channels,” IEEE Trans. on Information Theory, to be published.
P. Bianchi, J. Najim, M. Maida, M. Debbah, “Performance of Some Eigen-based Hypothesis Tests for Collaborative Sensing,” IEEE Trans. on Information Theory, to be published.
R. Couillet, M. Debbah, “A Bayesian Framework for Collaborative Multi-Source Signal Sensing,” IEEE Trans. on Signal Processing, vol. 58, no. 10, pp. 5186-5195, 2010.
S. Wagner, R. Couillet, M. Debbah, D. Slock, “Large System Analysis of Linear Precoding in MISO Broadcast Channels with Limited Feedback,” IEEE Trans. on Information Theory, 2010, submitted.
A. Masucci, Ø. Ryan, S. Yang, M. Debbah, “Gaussian Finite Dimensional Statistical Inference,” IEEE Trans. on Information Theory, 2009, submitted.
Ø. Ryan, M. Debbah, “Asymptotic Behaviour of Random Vandermonde Matrices with Entries on the Unit Circle,” IEEE Trans. on Information Theory, vol. 55, no. 7, pp. 3115-3148, July 2009.
M. Debbah, R. Muller, “MIMO channel modeling and the principle of maximum entropy,” IEEE Trans. on Information Theory, vol. 51, no. 5, pp. 1667-1690, 2005.


Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Selected authors’ recent bibliography

Articles in International Conferences
A. Kammoun, R. Couillet, J. Najim, M. Debbah, “A G-estimator for rate adaption in cognitive radios,” submitted to IEEE International Symposium on Information Theory, St Petersburg, Russia, 2011.
J. Yao, R. Couillet, J. Najim, E. Moulines, M. Debbah, “CLT for eigen-inference methods in cognitive radios,” IEEE International Conf. on Acoustics, Speech and Signal Proc., Prague, Czech Rep., 2011.
J. Hoydis, R. Couillet, M. Debbah, “Deterministic Equivalents for the Performance Analysis of Isometric Random Precoded Systems,” IEEE International Conference on Communications, Kyoto, Japan, 2011.
J. Hoydis, J. Najim, R. Couillet, M. Debbah, “Fluctuations of the Mutual Information in Large Distributed Antenna Systems with Colored Noise,” Forty-Eighth Annual Allerton Conference on Communication, Control, and Computing, Allerton, IL, USA, 2010.
R. Couillet, S. Wagner, M. Debbah, A. Silva, “The Space Frontier: Physical Limits of Multiple Antenna Information Transfer,” Inter-Perf 2008, Athens, Greece. BEST STUDENT PAPER AWARD.
R. Couillet, M. Debbah, V. Poor, “Self-organized spectrum sharing in large MIMO multiple access channels,” submitted to ISIT 2010.
L. S. Cardoso, M. Debbah, P. Bianchi, and J. Najim, “Cooperative spectrum sensing using random matrix theory,” 3rd International Symposium on Wireless Pervasive Computing (ISWPC), 2008.
R. Couillet, M. Debbah, “Uplink capacity of self-organizing clustered orthogonal CDMA networks in flat fading channels,” ITW 2009 Fall, Taormina, Sicily.

Book Chapters
Mathematical Foundations for Signal Processing, Communications and Networking
Editors: T. Chen, D. Rajan and E. Serpedin
Chapter title: “Random matrix theory”
Chapter authors: R. Couillet and M. Debbah
Publisher: CRC Press, Taylor & Francis Group
Year: 2011 (to appear)


Random Matrix Theory and Failure Detection in Complex Systems Failure detection and localization

Coming up soon...

Romain Couillet, Merouane Debbah, Random Matrix Methods for Wireless Communications.

1 Theoretical aspects
   1 Random matrices
   2 The Stieltjes transform method
   3 Free probability theory
   4 Combinatoric approaches
   5 Deterministic equivalents
   6 Spectrum analysis
   7 Eigen-inference
   8 Extreme eigenvalues
   9 Summary and partial conclusions

2 Applications to wireless communications
   1 Introduction to applications in telecommunications
   2 System performance of CDMA technologies
   3 Performance of multiple antennas systems
   4 Rate performance in multiple access and broadcast channels
   5 Performance of multi-cellular and relay networks
   6 Detection
   7 Estimation
   8 System modelling
   9 Perspectives
   10 Conclusion
