
IOP PUBLISHING JOURNAL OF PHYSICS A: MATHEMATICAL AND THEORETICAL

J. Phys. A: Math. Theor. 42 (2009) 485102 (13pp) doi:10.1088/1751-8113/42/48/485102

Spacing distributions for real symmetric 2 × 2 generalized Gaussian ensembles

M V Berry¹ and P Shukla²

¹ H H Wills Physics Laboratory, Tyndall Avenue, Bristol BS8 1TL, UK
² Department of Physics, Indian Institute of Technology, Kharagpur, India

Received 11 August 2009
Published 12 November 2009
Online at stacks.iop.org/JPhysA/42/485102

Abstract
For ensembles of 2 × 2 real symmetric matrices, the normalized spacing distributions P(S) form a family whose parameter space is the unit cube with coordinates determined by the means and variances of the diagonal and off-diagonal elements. The cube contains a variety of spacing distributions that are calculated analytically; they include the Wigner–Poisson transition, distributions with singularities and Gaussians. Unfolding is superfluous for 2 × 2 matrices, but it can be implemented, giving rise to a further variety of spacing distributions, some surprising.

PACS numbers: 02.70.rr, 02.65.−w, 05.45.mt

1. Introduction

2 × 2 matrices are useful models for many phenomena, including lens and polarization optics [1, 2], geometric phases [3], PT symmetry [4], quantum transitions [5–7], decoherence [8] and localization [9, 10]. Sometimes, the physics involves individual matrices, but more frequently insight is gained by considering families of matrices, for example those surrounding and including a degenerate matrix, where, for two parameters, the spectrum possesses a conical (diabolical) singularity [11]. In addition, average short-range spectral properties of statistical ensembles of N × N matrices can be illuminated by 2 × 2 examples [12].

Here we will study families of ensembles of 2 × 2 real symmetric matrices whose elements are independently Gauss-distributed, and we will calculate the unique fluctuation statistic for this family, namely the probability distribution P(S) of the difference between the two eigenvalues, normalized so that S is measured in units of the mean spacing. In the most familiar individual ensemble in this family, the elements have zero mean and the variance of the diagonal elements is twice that of the off-diagonals; this represents the 2 × 2 Gaussian orthogonal ensemble (GOE), and the spacing distribution, known as the Wigner surmise [12] of random-matrix theory, is a good approximation to that for N × N matrices when N ≫ 2.

In the generalized ensembles we consider here, the diagonal and off-diagonal elements have arbitrary means and variances, and these quantities are the parameters of the family. We



will find that there are three such parameters, and individual ensembles correspond to points in a unit cube. In section 2 we calculate P(S) for any point in the cube, and in section 3 we explore special cases, corresponding to particular edges and faces.

For a spectrum of just two eigenvalues, the unfolding procedure that is necessary for larger matrices [13] is superfluous, because P(S) can be calculated directly. Nevertheless, unfolding is interesting to explore for 2 × 2 matrices (section 4), not only for comparison with larger matrices but also because it leads to some unexpected results: for traceless matrices, the unfolded spacing distribution is uniform on 0 < S < 2, whatever the statistics of the matrix elements.

There are several reasons for studying the 2 × 2 model. (i) The matrices in these ensembles are caricatures of larger matrices more general than those of the GOE that have been widely applied in condensed-matter physics [14–17] and in studies of complexity [18–22]. (ii) It gives the simplest models for the spacing distribution, which is the statistic most commonly measured experimentally. (iii) Only for 2 × 2 matrices is it possible to carry out detailed analytical calculations. (iv) It is possible to study the complete parameter space, in contrast to larger matrices where there are many more parameters. (v) Even this simple generalization of the Wigner surmise displays a variety of spacing distributions.

In the following, we denote all probability distributions by P, distinguished by the variables appearing in their arguments; thus P(a) da = P(b) db.

2. Calculation of the spacing distribution

The most general real symmetric matrix is

$M = \frac{1}{2}\begin{pmatrix} T + X & Y \\ Y & T - X \end{pmatrix}.$  (2.1)

The spacing of the two eigenvalues $\lambda_+$ and $\lambda_-$, namely

$\Delta = \lambda_+ - \lambda_- = \sqrt{X^2 + Y^2},$  (2.2)

does not involve the trace T, which therefore plays no part in the direct calculation of the spacing distribution carried out in this section and the next. However, T will reappear in section 4 when we discuss the distribution of spacings for unfolded eigenvalues.

If we choose the elements X and Y to be Gauss-distributed and uncorrelated, with means and variances

$\bar{X},\quad V_X,\quad \bar{Y},\quad V_Y,$  (2.3)

these four quantities are the parameters of the ensemble before normalization. The distribution of unnormalized spacings is

$P(\Delta; \bar{X}, \bar{Y}, V_X, V_Y) = \frac{1}{2\pi\sqrt{V_X V_Y}} \int_{-\infty}^{\infty}\!\mathrm{d}X \int_{-\infty}^{\infty}\!\mathrm{d}Y\; \delta\!\left(\Delta - \sqrt{X^2 + Y^2}\right) \exp\left\{-\left(\frac{(X - \bar{X})^2}{2V_X} + \frac{(Y - \bar{Y})^2}{2V_Y}\right)\right\}.$  (2.4)

It is convenient to introduce polar coordinates for X, Y, and integrate over the radial coordinate. Then we scale to introduce the following natural dimensionless parameters and new temporary spacing variable:

$\xi \equiv \frac{\bar{X}}{(V_X V_Y)^{1/4}}, \qquad \eta \equiv \frac{\bar{Y}}{(V_X V_Y)^{1/4}}, \qquad \alpha \equiv \sqrt{\frac{V_X}{V_Y}}, \qquad D \equiv \frac{\Delta}{(V_X V_Y)^{1/4}}.$  (2.5)


In terms of D, the resulting spacing distribution now depends on only three parameters:

$P(D; \xi, \eta, \alpha) = \frac{D}{2\pi}\exp\left(-\left(\frac{\xi^2}{2\alpha} + \frac{\eta^2\alpha}{2}\right)\right) \int_0^{2\pi}\!\mathrm{d}\theta\, \exp\left(-\frac{D^2}{2}\left(\frac{\cos^2\theta}{\alpha} + \alpha\sin^2\theta\right) + D\left(\frac{\xi\cos\theta}{\alpha} + \alpha\eta\sin\theta\right)\right).$  (2.6)

Finally, we scale D by the mean spacing, which is

$\bar{D}(\xi, \eta, \alpha) = \int_0^{\infty}\!\mathrm{d}D\, D\, P(D) = \frac{\exp\left(-\left(\frac{\xi^2}{2\alpha} + \frac{\alpha\eta^2}{2}\right)\right)}{2\sqrt{2\pi}} \int_0^{2\pi}\!\mathrm{d}\theta\; \frac{\sqrt{\frac{2}{\pi}}\,A + (1 + A^2)\exp\left(\frac{1}{2}A^2\right)\left(1 + \operatorname{erf}\left(\frac{A}{\sqrt{2}}\right)\right)}{\left(\frac{\cos^2\theta}{\alpha} + \alpha\sin^2\theta\right)^{3/2}},$  (2.7)

where

$A = \frac{\frac{\xi}{\alpha}\cos\theta + \alpha\eta\sin\theta}{\sqrt{\frac{\cos^2\theta}{\alpha} + \alpha\sin^2\theta}}.$  (2.8)

Thus, the desired spacing distribution is

$P(S; \xi, \eta, \alpha) = \frac{S\,\bar{D}(\xi, \eta, \alpha)^2}{2\pi}\exp\left(-\left(\frac{\xi^2}{2\alpha} + \frac{\eta^2\alpha}{2}\right)\right) \int_0^{2\pi}\!\mathrm{d}\theta\, \exp\left(F(\theta; S, \xi, \eta, \alpha)\right),$  (2.9)

where

$F(\theta; S, \xi, \eta, \alpha) = -\frac{S^2\bar{D}(\xi, \eta, \alpha)^2}{2}\left(\frac{\cos^2\theta}{\alpha} + \alpha\sin^2\theta\right) + S\,\bar{D}(\xi, \eta, \alpha)\left(\frac{\xi\cos\theta}{\alpha} + \alpha\eta\sin\theta\right).$  (2.10)

The spacing distribution is invariant under each of the replacements $\xi \to -\xi$, $\eta \to -\eta$, $\{\alpha, \xi, \eta\} \to \{\alpha^{-1}, \eta, \xi\}$. Incorporating these symmetries, the ranges of the three parameters are

$0 < \xi < \infty, \qquad 0 < \eta < \infty, \qquad 0 \leqslant \alpha \leqslant 1,$  (2.11)

corresponding to an infinite square (ξ, η) slab with unit height (α). With coordinates tanh ξ and tanh η, this parameter space becomes the unit cube (figure 1), in which each point represents one of the generalized Gaussian ensembles.
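As a numerical cross-check of (2.7)–(2.10), the following minimal sketch (assuming NumPy and SciPy are available; the function names mean_D and P_S are illustrative, not from the paper) compares the analytic P(S; ξ, η, α) with a direct Monte Carlo sample of spacings of 2 × 2 matrices drawn from a generalized ensemble.

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def mean_D(xi, eta, alpha):
    # Mean scaled spacing, equation (2.7), with A(theta) from (2.8).
    pref = np.exp(-(xi**2 / (2 * alpha) + alpha * eta**2 / 2)) / (2 * np.sqrt(2 * np.pi))
    def integrand(th):
        B = np.cos(th)**2 / alpha + alpha * np.sin(th)**2
        A = (xi * np.cos(th) / alpha + alpha * eta * np.sin(th)) / np.sqrt(B)
        num = (np.sqrt(2 / np.pi) * A
               + (1 + A**2) * np.exp(A**2 / 2) * (1 + erf(A / np.sqrt(2))))
        return num / B**1.5
    return pref * quad(integrand, 0, 2 * np.pi)[0]

def P_S(S, xi, eta, alpha):
    # Normalized spacing distribution, equations (2.9)-(2.10).
    Dbar = mean_D(xi, eta, alpha)
    pref = S * Dbar**2 / (2 * np.pi) * np.exp(-(xi**2 / (2 * alpha) + alpha * eta**2 / 2))
    def integrand(th):
        B = np.cos(th)**2 / alpha + alpha * np.sin(th)**2
        F = (-0.5 * S**2 * Dbar**2 * B
             + S * Dbar * (xi * np.cos(th) / alpha + alpha * eta * np.sin(th)))
        return np.exp(F)
    return pref * quad(integrand, 0, 2 * np.pi)[0]

# Monte Carlo: sample X, Y, form the spacing (2.2), scale by the sample mean spacing.
rng = np.random.default_rng(0)
xi, eta, alpha = 0.8, 0.3, 0.5              # an arbitrary interior point of the cube
VX, VY = alpha, 1.0 / alpha                 # any VX, VY with sqrt(VX/VY) = alpha
X = rng.normal(xi * (VX * VY)**0.25, np.sqrt(VX), 200_000)
Y = rng.normal(eta * (VX * VY)**0.25, np.sqrt(VY), 200_000)
S_mc = np.sqrt(X**2 + Y**2)
S_mc /= S_mc.mean()

hist, edges = np.histogram(S_mc, bins=60, range=(0.0, 3.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
analytic = np.array([P_S(s, xi, eta, alpha) for s in centres])
print("max |Monte Carlo - analytic|:", np.abs(hist - analytic).max())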

3. Special cases

Case A. Both means zero ($\bar{X} = \bar{Y} = 0$), arbitrary variances

This is ξ = η = 0, so the only parameter is α (see figure 1). Equations (2.7) and (2.9) simplify considerably; the quantity A in (2.8) is zero, and $\bar{D}$ can be expressed in terms of hypergeometric functions:


Figure 1. Parameter cube of the family of ensembles, with coordinates tanh ξ, tanh η and α, each running from 0 to 1, indicating the special cases A–D treated in section 3 and the Wigner and Poisson corners.

$\bar{D}(\alpha, 0, 0) = \frac{1}{2\sqrt{2\pi}}\int_0^{2\pi}\frac{\mathrm{d}\theta}{\left(\frac{\cos^2\theta}{\alpha} + \alpha\sin^2\theta\right)^{3/2}} = \frac{2\sqrt{\pi}}{(\alpha^{-1} + \alpha)^{3/2}}\;{}_2F_1\!\left(\frac{3}{4}, \frac{5}{4}; 1; \frac{(\alpha^{-1} - \alpha)^2}{(\alpha^{-1} + \alpha)^2}\right) \approx \sqrt{\frac{2}{\pi}(\alpha^{-1} + \alpha) + \frac{\pi - 8/\pi}{\alpha^{-1} + \alpha}},$  (3.1)

in which the approximation in the last member is exact as α → 0 and α → 1 and the error never exceeds 1.5% over the entire range of α.

P(S) (equation (2.9)) is simple too: in terms of the Bessel function $I_0$,

$P(S; \alpha, 0, 0) = S\,\bar{D}(\alpha, 0, 0)^2 \exp\left\{-\tfrac{1}{4}S^2\bar{D}(\alpha, 0, 0)^2\left(\alpha + 1/\alpha\right)\right\} I_0\!\left(\tfrac{1}{4}S^2\bar{D}(\alpha, 0, 0)^2\left(\alpha - 1/\alpha\right)\right).$  (3.2)

This special case has been derived before [23], and described as the quadratic Rayleigh–Rice distribution.

A familiar limiting case is where both variances are equal; then

$P(S; 1, 0, 0) = \frac{\pi}{2}S\exp\left(-\tfrac{1}{4}\pi S^2\right),$  (3.3)

reproducing the familiar Wigner surmise, displaying linear eigenvalue repulsion (as do all the ensembles, except one that will be considered later in this section). The fact that (3.3) corresponds to equal variances for the diagonal and off-diagonal elements might seem unfamiliar. But if T in (2.1) has variance $V_T = V_X = V_Y$, the variance of the diagonal elements is

$V_{\mathrm{diag}} = V_{T+X} = V_{T-X} = V_T + V_X = 2V_X,$  (3.4)

explaining the factor 2 in the usual formulations. The opposite limit is α = 0, for which

$P(S; 0, 0, 0) = \frac{2}{\pi}\exp\left(-\frac{S^2}{\pi}\right).$  (3.5)

Since this corresponds to a purely diagonal matrix, P(S; 0, 0, 0) is simply the distribution of distances between two independent Gauss-distributed numbers, that is, the equivalent of


Figure 2. Spacing distribution P(S; α, 0, 0) (equation (3.2)) for (a) α = 0 (Poisson), (b) α = 0.1, (c) α = 0.3, (d) α = 1 (Wigner).

the Poisson distribution for 2 × 2 matrices, displaying no eigenvalue repulsion (the familiar exponential distribution emerges for diagonal N × N matrices as N increases).

Thus, the spacing distribution (3.2) represents an interpolation between the Poisson and Wigner ensembles, as illustrated in figure 2. By choosing a non-Gaussian ensemble for the diagonal elements, it is possible [24, 25] to reproduce the familiar exponential form for the Poisson distribution for large matrices, but for 2 × 2 matrices the exponential form has no special status: only the lack of eigenvalue repulsion is significant.
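A brief numerical sketch of case A (illustrative function names, assuming SciPy's hyp2f1 and i0): it evaluates the exact and approximate forms of (3.1), confirms that the worst relative error of the approximation is of order 1.5%, and checks that α = 1 reproduces the Wigner surmise (3.3).

import numpy as np
from scipy.special import hyp2f1, i0

def mean_D_caseA(alpha):
    # Exact hypergeometric form in equation (3.1).
    c = 1.0 / alpha + alpha
    return 2 * np.sqrt(np.pi) / c**1.5 * hyp2f1(0.75, 1.25, 1.0, ((1.0 / alpha - alpha) / c)**2)

def mean_D_caseA_approx(alpha):
    # Approximation in (3.1): exact as alpha -> 0 and at alpha = 1.
    c = 1.0 / alpha + alpha
    return np.sqrt(2 * c / np.pi + (np.pi - 8 / np.pi) / c)

def P_S_caseA(S, alpha):
    # Quadratic Rayleigh-Rice distribution, equation (3.2).
    D2 = mean_D_caseA(alpha)**2
    return (S * D2 * np.exp(-0.25 * S**2 * D2 * (alpha + 1.0 / alpha))
            * i0(0.25 * S**2 * D2 * (alpha - 1.0 / alpha)))

alphas = np.linspace(0.01, 1.0, 200)
err = np.abs(mean_D_caseA_approx(alphas) / mean_D_caseA(alphas) - 1)
print("worst relative error of the approximation: %.2f%%" % (100 * err.max()))

# alpha = 1: equation (3.2) collapses to the Wigner surmise (3.3).
S = np.linspace(0.0, 3.0, 7)
print(np.allclose(P_S_caseA(S, 1.0), 0.5 * np.pi * S * np.exp(-0.25 * np.pi * S**2)))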

Case B. Both variances equal (VX = VY ), arbitrary means

This is the α = 1 face of the parameter cube (see figure 1). The parameters ξ and η appear only in the combination

$\rho = \sqrt{\xi^2 + \eta^2},$  (3.7)

and (2.7) and (2.9) give

$\bar{D}\left(1, \sqrt{\xi^2 + \eta^2} = \rho\right) = \frac{1}{2}\sqrt{\frac{\pi}{2}}\exp\left(-\tfrac{1}{4}\rho^2\right)\left[(2 + \rho^2)\,I_0\!\left(\tfrac{1}{4}\rho^2\right) + \rho^2 I_1\!\left(\tfrac{1}{4}\rho^2\right)\right],$  (3.8)

and

$P\left(S; 1, \sqrt{\xi^2 + \eta^2} = \rho\right) = S\,\bar{D}(\rho)^2\exp\left\{-\tfrac{1}{2}\left(\rho^2 + S^2\bar{D}(\rho)^2\right)\right\} I_0\!\left(S\rho\,\bar{D}(\rho)\right).$  (3.9)

This looks similar to the quadratic Rayleigh–Rice distribution in (3.2) of case A but is in fact very different (except for ρ = 0) because the Bessel function involves S rather than S²; it is the ordinary Rice distribution.

Figure 3 shows $P(S; 1, \sqrt{\xi^2 + \eta^2} = \rho)$ for several values of ρ. Now the transition is very different: from Wigner at ρ = 0 via a Gaussian sharpening to a δ function:

$P\left(S; 1, \sqrt{\xi^2 + \eta^2} = \rho \to \infty\right) \to \frac{\rho}{\sqrt{2\pi}}\exp\left(-\tfrac{1}{2}\rho^2(S - 1)^2\right) \to \delta(S - 1).$  (3.10)

This limit is obvious, because if either mean is much larger than the common variance, then the spacing distribution before normalization by the mean spacing is a Gaussian centred on a large spacing, which when normalized gives P(S) as a narrow Gaussian centred on S = 1, corresponding to a rigid spectrum.
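The Rice form (3.8)–(3.9) is easy to evaluate with SciPy's Bessel functions; the sketch below (illustrative names) checks that the mean of S stays at 1 and that the rms width shrinks like 1/ρ, the sharpening towards δ(S − 1) described in (3.10).

import numpy as np
from scipy.special import i0, i1

def mean_D_caseB(rho):
    # Equation (3.8).
    q = 0.25 * rho**2
    return 0.5 * np.sqrt(np.pi / 2) * np.exp(-q) * ((2 + rho**2) * i0(q) + rho**2 * i1(q))

def P_S_caseB(S, rho):
    # Rice distribution, equation (3.9).
    D = mean_D_caseB(rho)
    return S * D**2 * np.exp(-0.5 * (rho**2 + S**2 * D**2)) * i0(S * rho * D)

S = np.linspace(0.0, 4.0, 2001)
dS = S[1] - S[0]
for rho in (0.0, 2.0, 5.0):
    p = P_S_caseB(S, rho)
    mean = np.sum(S * p) * dS                     # should remain close to 1
    rms = np.sqrt(np.sum((S - mean)**2 * p) * dS)
    print(f"rho = {rho}: mean = {mean:.3f}, rms width = {rms:.3f}")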


Figure 3. Spacing distribution $P(S; 1, \sqrt{\xi^2 + \eta^2} = \rho)$ (equation (3.9)) for (a) ρ = 0 (Wigner), (b) ρ = 2, (c) ρ = 3, (d) ρ = 5.

Case C. One variance zero, arbitrary means

Without loss of generality we can choose $V_X$ as the zero variance. Then from (2.5) this case corresponds to α = 0, ξ = η = ∞ (see figure 1). However, it is easier to return to (2.4) and note the simplification that this case is simply $X = \bar{X}$, i.e. the X distribution is a δ function, so

$P(\Delta) = \frac{1}{\sqrt{2\pi V_Y}}\int_{-\infty}^{\infty}\!\mathrm{d}Y\, \delta\!\left(\Delta - \sqrt{\bar{X}^2 + Y^2}\right)\exp\left\{-\frac{(Y - \bar{Y})^2}{2V_Y}\right\}$
$= \frac{\Theta(\Delta - |\bar{X}|)\,2\Delta}{\sqrt{2\pi V_Y\left(\Delta^2 - \bar{X}^2\right)}}\exp\left\{-\left(\frac{\Delta^2 - \bar{X}^2 + \bar{Y}^2}{2V_Y}\right)\right\}\cosh\left(\frac{\bar{Y}}{V_Y}\sqrt{\Delta^2 - \bar{X}^2}\right),$  (3.11)

in which Θ denotes the unit step. Now there are two new natural parameters and a new natural temporary spacing variable:

$\xi_1 \equiv \frac{\bar{X}}{\sqrt{V_Y}} = \sqrt{\alpha}\,\xi, \qquad \eta_1 \equiv \frac{\bar{Y}}{\sqrt{V_Y}} = \sqrt{\alpha}\,\eta, \qquad D_1 \equiv \frac{\Delta}{\sqrt{V_Y}} = \sqrt{\alpha}\,D.$  (3.12)

The mean spacing is

$\bar{D}_1(\xi_1, \eta_1) = \sqrt{\frac{2}{\pi}}\exp\left(-\tfrac{1}{2}\eta_1^2\right)\int_0^{\infty}\!\mathrm{d}\tau\,\sqrt{\tau^2 + \xi_1^2}\,\exp\left(-\tfrac{1}{2}\tau^2\right)\cosh(\tau\eta_1) \approx \sqrt{\frac{2}{\pi} + \xi_1^2 + \eta_1^2},$  (3.13)

in which the approximation is exact for ξ₁ = η₁ = 0 and as ξ₁ → ∞ or η₁ → ∞, and the error never exceeds 9%. The spacing distribution is (in an obvious notation)

$P(S; 0, \xi_1, \eta_1) = \frac{\Theta\!\left(S - |\xi_1|/\bar{D}_1(\xi_1, \eta_1)\right)\,2S\,\bar{D}_1(\xi_1, \eta_1)^2}{\sqrt{2\pi\left(S^2\bar{D}_1(\xi_1, \eta_1)^2 - \xi_1^2\right)}}\exp\left\{-\tfrac{1}{2}\left(S^2\bar{D}_1(\xi_1, \eta_1)^2 - \xi_1^2 + \eta_1^2\right)\right\}\cosh\left(\eta_1\sqrt{S^2\bar{D}_1(\xi_1, \eta_1)^2 - \xi_1^2}\right).$  (3.14)


Figure 4. Spacing distribution P(S; 0, ξ₁, η₁) (equations (3.12)–(3.14)), for the indicated values of {ξ₁, η₁}, with ξ₁ and η₁ each taking the values 0, 1, 2 and 3.

Figure 4 shows the variety of spacing distributions that can be generated by different choices of the parameters ξ₁ and η₁. The distributions possess singularities at $S_c = \xi_1/\bar{D}_1(\xi_1, \eta_1)$, with $S_c \approx \xi_1/\sqrt{\xi_1^2 + \eta_1^2}$ for large ξ₁ or η₁. In addition, there are maxima close to $S = S_c\sqrt{1 + \eta_1^2/\xi_1^2}$; these maxima get larger and narrower as ξ₁ or η₁ increase, leading in the limit to the rigid-spectrum distribution δ(S − 1) (cf (3.10)).
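A small sketch of case C (illustrative names, assuming SciPy): it computes the mean spacing of (3.13) by quadrature, compares it with the approximate square-root form, and locates the square-root singularity of (3.14) at S_c = |ξ₁|/D̄₁.

import numpy as np
from scipy.integrate import quad

def mean_D1(xi1, eta1):
    # Equation (3.13), evaluated by quadrature.
    f = lambda t: np.sqrt(t**2 + xi1**2) * np.exp(-0.5 * t**2) * np.cosh(t * eta1)
    return np.sqrt(2 / np.pi) * np.exp(-0.5 * eta1**2) * quad(f, 0, np.inf)[0]

def P_S_caseC(S, xi1, eta1):
    # Equation (3.14); zero below the singularity at S_c = |xi1| / D1bar.
    D1 = mean_D1(xi1, eta1)
    arg = S**2 * D1**2 - xi1**2
    if arg <= 0.0:
        return 0.0
    return (2 * S * D1**2 / np.sqrt(2 * np.pi * arg)
            * np.exp(-0.5 * (arg + eta1**2)) * np.cosh(eta1 * np.sqrt(arg)))

xi1, eta1 = 2.0, 1.0
D1 = mean_D1(xi1, eta1)
Sc = abs(xi1) / D1
print("exact mean spacing:", D1, " approximation (3.13):", np.sqrt(2 / np.pi + xi1**2 + eta1**2))
print("singularity at S_c =", Sc, " P(S) just above S_c:", P_S_caseC(Sc * 1.001, xi1, eta1))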

Case D. |ξ| and/or |η| ≫ 1

This corresponds to the two means being much greater than the variances (see figure 1). Then we expect the scaled spacings D (equation (2.5)) to be concentrated near $\sqrt{\xi^2 + \eta^2}$. In the integral (2.9), the exponent is a rapidly varying function of θ, whose stationary point is close to θ = arctan(η/ξ). Straightforward but tedious algebra then leads to the spacing distribution

$P(S; \alpha, \xi, \eta) \approx \frac{\xi^2 + \eta^2}{\sqrt{2\pi\left(\alpha\xi^2 + \frac{\eta^2}{\alpha}\right)}}\exp\left\{-\frac{(\xi^2 + \eta^2)^2(S - 1)^2}{2\left(\alpha\xi^2 + \frac{\eta^2}{\alpha}\right)}\right\} \qquad \text{if } \xi^2 + \eta^2 \gg 1.$  (3.15)

This approximation is a Gaussian with rms width

$\frac{\sqrt{\alpha\xi^2 + \frac{\eta^2}{\alpha}}}{\xi^2 + \eta^2},$  (3.16)

getting narrower as ξ² + η² increases, again leading to the limiting rigid-spectrum distribution δ(S − 1).
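As a quick Monte Carlo check of (3.15)–(3.16) (an illustrative sketch assuming NumPy; the variable names are mine): with large means the sample spacing distribution becomes a narrow Gaussian around S = 1 whose rms width agrees with (3.16).

import numpy as np

rng = np.random.default_rng(1)
xi, eta, alpha = 6.0, 4.0, 0.5
VX, VY = alpha, 1.0 / alpha                  # sqrt(VX/VY) = alpha and (VX*VY)^(1/4) = 1,
X = rng.normal(xi, np.sqrt(VX), 500_000)     # so the means are simply xi and eta
Y = rng.normal(eta, np.sqrt(VY), 500_000)
S = np.sqrt(X**2 + Y**2)
S /= S.mean()

width_mc = S.std()
width_316 = np.sqrt(alpha * xi**2 + eta**2 / alpha) / (xi**2 + eta**2)
print(f"Monte Carlo rms width {width_mc:.4f}  vs  equation (3.16) {width_316:.4f}")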


4. Unfoldings

The normalization we have used, demanding that the mean spacing $\bar{S} = 1$, looks different from the usual unfolding procedure [13], in which $S = \Delta\, d(\lambda)$, where d(λ) is the mean eigenvalue density for eigenvalues near λ. But the usual procedure is intended to apply to large matrices, where d(λ) varies very slowly on the scale of the spacings Δ, so the unfolding is equivalent to the requirement $\bar{S} = 1$. By contrast, in our 2 × 2 case, d(λ) varies considerably on the scale Δ, so the usual procedure (which involves λ) is ambiguous. Moreover, it is superfluous, because we can impose $\bar{S} = 1$ directly. Nevertheless, it is interesting to explore unfolding, because some of the resulting distributions are unexpected.

The eigenvalues of the matrix (2.1) are

$\lambda_{\pm} = \tfrac{1}{2}(T \pm R),$  (4.1)

where $R \equiv \sqrt{X^2 + Y^2}$. Then the mean level density is

$d(\lambda) = \int_{-\infty}^{\infty}\!\mathrm{d}T\, P(T)\int_{-\infty}^{\infty}\!\mathrm{d}X\, P(X)\int_{-\infty}^{\infty}\!\mathrm{d}Y\, P(Y)\left[\delta\!\left(\lambda - \tfrac{1}{2}(T - R)\right) + \delta\!\left(\lambda - \tfrac{1}{2}(T + R)\right)\right].$  (4.2)

Noting that |λ − T/2| = Δ/2, the density can be expressed in terms of the spacing distribution (2.4) before scaling, smoothed by the distribution of the trace T:

$d(\lambda) = 2\int_{-\infty}^{\infty}\!\mathrm{d}\Delta\, P(T = \Delta + 2\lambda)\, P(|\Delta|).$  (4.3)

The associated level counting function,

$N(\lambda) = \int_{-\infty}^{\lambda}\!\mathrm{d}\lambda'\, d(\lambda'),$  (4.4)

increases from 0 to 2 as λ increases from −∞ to +∞. A natural and convenient definition of the unfolded levels (though not the only possibility) is

$\varepsilon_{\pm} \equiv N(\lambda_{\pm}),$  (4.5)

giving the local unfolded spacing σ as a function of the spacing Δ before unfolding and the mean eigenvalue position T/2:

$\sigma(\Delta; T) = \varepsilon_+ - \varepsilon_- = \int_{\frac{1}{2}(T - \Delta)}^{\frac{1}{2}(T + \Delta)}\!\mathrm{d}\lambda\; d(\lambda).$  (4.6)

As Δ increases from 0 to +∞ with T fixed, σ increases monotonically from 0 to 2. Therefore, the function σ can be inverted to give Δ(σ; T), leading to the desired distribution of the local unfolded spacings, depending on T as well as σ:

$P(\sigma; T) = P(\Delta(\sigma; T))\left|\frac{\mathrm{d}\Delta(\sigma; T)}{\mathrm{d}\sigma}\right| = \frac{2P(\Delta(\sigma; T))}{d\!\left(\tfrac{1}{2}(T + \Delta(\sigma; T))\right) + d\!\left(\tfrac{1}{2}(T - \Delta(\sigma; T))\right)}.$  (4.7)

This distribution vanishes outside the interval 0 < σ < 2.

The case of traceless matrices (T = 0) is particularly interesting. The distribution in (4.3)

is P(T ) = δ(T ) and the level density is just the spacing distribution

$d(\lambda) = 2P(\Delta = 2|\lambda|),$  (4.8)


Figure 5. Level density (4.10) for the indicated values (0, 0.1, 0.5 and 1) of the variance $V_T$ of the trace T in (2.1).

which vanishes at λ = 0 because of level repulsion. Then the denominator in (4.7) cancels the numerator, and the spacing distribution is simply a constant:

$P(\sigma; T = 0) = \begin{cases} \tfrac{1}{2} & \text{if } 0 < \sigma < 2, \\ 0 & \text{if } \sigma > 2. \end{cases}$  (4.9)

This unexpected result holds for any traceless 2 × 2 matrices; the elements X and Y need not be Gauss-distributed, and may be correlated.
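The following sketch (illustrative, assuming NumPy) demonstrates (4.9) numerically. For T = 0 the unfolding (4.5)–(4.6) reduces to σ = 2F(Δ), where F is the cumulative distribution of the spacing Δ; replacing F by the empirical distribution of the sample makes the unfolded spacings uniform on (0, 2) even for deliberately non-Gaussian, correlated X and Y.

import numpy as np

rng = np.random.default_rng(2)
n = 400_000

# Deliberately non-Gaussian and correlated matrix elements of a traceless matrix.
X = rng.uniform(-1.0, 2.0, n)
Y = 0.5 * X + rng.laplace(0.0, 0.7, n)
Delta = np.sqrt(X**2 + Y**2)                  # spacing, equation (2.2)

# Empirical unfolding: sigma = 2 F(Delta), with F the empirical CDF of the spacings.
ranks = np.argsort(np.argsort(Delta))
sigma = 2.0 * (ranks + 0.5) / n

hist, _ = np.histogram(sigma, bins=20, range=(0.0, 2.0), density=True)
print("histogram of sigma, should be flat at 1/2:")
print(np.round(hist, 3))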

For matrices that are not traceless, we can take the distribution of T to be Gaussian with variance $V_T$ and zero mean (a non-zero mean is trivially accommodated by a shift in λ). For X and Y it will suffice to consider the simplest case $\bar{X} = \bar{Y} = 0$, $V_X = V_Y = 1$, that is ξ = η = 0, α = 1 in the notation of section 2, for which the spacing distribution is just the Wigner surmise (3.3). Then the level density (4.3) is

$d(\lambda, V_T) = \frac{2}{1 + V_T}\sqrt{\frac{2V_T}{\pi}}\exp\left(-\frac{2\lambda^2}{V_T}\right) + \frac{4\lambda}{(1 + V_T)^{3/2}}\exp\left(-\frac{2\lambda^2}{1 + V_T}\right)\operatorname{erf}\left(\lambda\sqrt{\frac{2}{V_T(1 + V_T)}}\right).$  (4.10)

Figure 5 illustrates how the density gets smoothed and broadened as $V_T$ increases from zero.
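As a check of the form of (4.10) reconstructed above (a sketch with illustrative names, assuming NumPy/SciPy), one can histogram the eigenvalues λ± = ½(T ± √(X² + Y²)) for V_X = V_Y = 1 and compare with the closed-form density.

import numpy as np
from scipy.special import erf

def level_density(lam, VT):
    # Equation (4.10); its integral over lambda is 2 (two levels per matrix).
    return (2 / (1 + VT) * np.sqrt(2 * VT / np.pi) * np.exp(-2 * lam**2 / VT)
            + 4 * lam / (1 + VT)**1.5 * np.exp(-2 * lam**2 / (1 + VT))
            * erf(lam * np.sqrt(2 / (VT * (1 + VT)))))

rng = np.random.default_rng(3)
n, VT = 300_000, 0.5
T = rng.normal(0.0, np.sqrt(VT), n)
R = np.sqrt(rng.normal(0.0, 1.0, n)**2 + rng.normal(0.0, 1.0, n)**2)
levels = np.concatenate([(T - R) / 2, (T + R) / 2])

hist, edges = np.histogram(levels, bins=80, range=(-4.0, 4.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
# density=True normalizes the histogram to 1, while (4.10) integrates to 2.
print("max |2*histogram - d(lambda, VT)|:", np.abs(2 * hist - level_density(centres, VT)).max())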

The unfolded local spacings are, from (4.6),

$\sigma(\Delta; T, V_T) = \operatorname{erf}\left(\frac{\Delta - T}{\sqrt{2V_T}}\right) + \operatorname{erf}\left(\frac{\Delta + T}{\sqrt{2V_T}}\right) - \frac{1}{\sqrt{1 + V_T}}\left(\operatorname{erf}\left(\frac{\Delta - T}{\sqrt{2V_T(1 + V_T)}}\right)\exp\left(-\frac{(\Delta - T)^2}{2(1 + V_T)}\right) + \operatorname{erf}\left(\frac{\Delta + T}{\sqrt{2V_T(1 + V_T)}}\right)\exp\left(-\frac{(\Delta + T)^2}{2(1 + V_T)}\right)\right).$  (4.11)

Figures 6–8 show the corresponding local unfolded spacing distributions (4.7) for different values of the mean eigenvalue T/2 and variance $V_T$. It is clear that for these 2 × 2 matrices


Figure 6. Unfolded local spacing distribution (4.7) for variance $V_T$ = 0.1 and (a) T = 0, (b) T = 0.7, (c) T = 2, (d) T = 2.2, (e) T = 2.5, (f) T = 3.

Figure 7. As figure 6, for $V_T$ = 0.5 and (a) T = 0, (b) T = 1, (c) T = 2, (d) T = 2.5, (e) T = 3, (f) T = 3.5.

unfolding has introduced unnecessary complication, even in this simplest case where the direct averaging gives the unique Wigner surmise, independently of the distribution of T.


Figure 8. As figure 6, for $V_T$ = 1 and (a) T = 0, (b) T = 2, (c) T = 3, (d) T = 4.

The complication can be reduced by averaging the local spacing distributions (4.7) over T, giving

$P(\sigma; V_T) \equiv \int_{-\infty}^{\infty}\!\mathrm{d}T\, P(T)\, P(\sigma; T, V_T) = \sqrt{\frac{2}{\pi V_T}}\int_0^{\infty}\!\mathrm{d}T\; \frac{2\Delta(\sigma; T, V_T)\exp\left(-\tfrac{1}{2}\left(\frac{T^2}{V_T} + \Delta^2(\sigma; T, V_T)\right)\right)}{d\!\left(\tfrac{1}{2}(T + \Delta(\sigma; T, V_T)), V_T\right) + d\!\left(\tfrac{1}{2}(T - \Delta(\sigma; T, V_T)), V_T\right)}.$  (4.12)

But, as figure 9 illustrates, the transition from the uniform distribution for $V_T$ = 0 as $V_T$ increases still generates a variety of distributions. And for $V_T$ = 1 (figure 9(d)), for which the distribution of matrix elements in (2.1) is the GOE, the unfolded distribution still differs substantially from the Wigner surmise that is such a good model for larger matrices.

It would be easy to evaluate (4.7) for matrices in the generalized ensembles considered in sections 2 and 3. This would augment the parameter cube with the additional parameters $\bar{T}$ and $V_T$, and introduce further complication as in the simplest case considered above.

5. Discussion

Of the six possible parameters labelling the general Gaussian statistics of 2 × 2 matrices (namely the means and variances of T, X and Y in (2.1)), only three (namely ξ, η and α, defined in (2.5)) are required to encompass all the spacing distributions. It is clear that this parameter family includes a rich variety of spacing distributions, in addition to the familiar stationary Wigner, Poisson and rigid distributions. Figures 2–4 illustrate this for the special cases considered in section 3, corresponding to points on the boundary of the parameter cube. Computations for points in the interior of the cube show similar spacing distributions. Particular single-parameter paths through the full parameter space have been found to be convenient models for wide classes of physical systems [26–29], though of course the properties of each ensemble are independent of whatever path is chosen to reach it.


Figure 9. Average (4.12) of the local spacing distributions (4.7), for (a) $V_T$ = 0.001, (b) $V_T$ = 0.1, (c) $V_T$ = 0.5, (d) $V_T$ = 1. In (d) the dashed curve gives the Wigner surmise $\sigma\exp(-\tfrac{1}{2}\sigma^2)$.

Some of the P(S) pictures appear to exhibit stronger-than-linear level repulsion, and this is genuine in case C when ξ₁ > 0. Otherwise, the appearance of repulsion is illusory, for equation (2.9) shows that there is almost always linear repulsion, given precisely by

$P(S; \alpha, \xi, \eta) \to S\,\bar{D}(\alpha, \xi, \eta)^2\exp\left(-\left(\frac{\xi^2}{2\alpha} + \frac{\eta^2\alpha}{2}\right)\right) \qquad \text{as } S \to 0.$  (5.1)

In some cases, the coefficient is very small, generating the illusion of much stronger repulsion.

As discussed in section 4, the unfolding procedure employed to generate fluctuation statistics in larger matrices, whose purpose is to ensure that the mean spacing is unity, is unnecessary in the 2 × 2 case where this normalization can be implemented directly. Moreover, as figures 6–9 illustrate, unfolding for 2 × 2 matrices generates a variety of spacing distributions, even for the simplest case ξ = η = 0, $V_X = V_Y = 1$; none of these distributions reproduces the Wigner surmise (3.3).

The family of generalized ensembles considered here is the simplest. Further generalization, to 2 × 2 complex Hermitian ensembles, or to non-Hermitian ensembles, is straightforward but of course involves more parameters.

Acknowledgments

We thank two referees for helpful suggestions. MVB’s research is supported by the Leverhulme Trust.

References

[1] Gerrard A and Burch J M 1975 Introduction to Matrix Methods in Optics (London: Wiley)
[2] Jones R C 1941 New calculus for the treatment of optical systems J. Opt. Soc. Am. 31 488–93
[3] Berry M V 1984 Quantal phase factors accompanying adiabatic changes Proc. R. Soc. Lond. A 392 45–57
[4] Bender C M, Berry M V and Mandilara A 2002 Generalized PT symmetry and real spectra J. Phys. A: Math. Gen. 35 L467–71
[5] Landau L 1932 Zur Theorie der Energieübertragung II Phys. Sov. Union 2 46–51
[6] Majorana E 1932 Atomi orientati in campo magnetico variabile Nuovo Cimento 9 43–50
[7] Zener C 1932 Non-adiabatic crossing of energy levels Proc. R. Soc. Lond. A 137 696–702
[8] Berry M V 1995 Some two-state quantum asymptotics Fundamental Problems of Quantum Theory ed D M Greenberger and A Zeilinger (Ann. N.Y. Acad. Sci. vol 255) pp 303–17
[9] Furstenberg H 1963 Non-commuting random-matrix products Trans. Am. Math. Soc. 108 377–428
[10] Berry M V and Klein S 1997 Transparent mirrors: rays, waves and localization Eur. J. Phys. 18 222–8
[11] Teller E 1937 The crossing of potential surfaces J. Phys. Chem. 41 109–16
[12] Porter C E 1965 Statistical Theories of Spectra: Fluctuations (New York: Academic)
[13] Haake F 1991 Quantum Signatures of Chaos (Heidelberg: Springer)
[14] Shepelyansky D L 1994 Coherent propagation of two interacting particles in a random potential Phys. Rev. Lett. 73 2607–10
[15] Fyodorov Y V and Mirlin A D 1991 Scaling properties of localization in random band matrices: a sigma-model approach Phys. Rev. Lett. 67 2405–9
[16] Mirlin A D, Fyodorov Y V, Dittes F-M, Quezada J and Seligman T H 1996 Transition from localized to extended eigenstates in the ensemble of power-law random banded matrices Phys. Rev. E 54 3221–30
[17] Fyodorov Y V, Chubykalo O A, Izrailev F M and Casati G 1996 Wigner random banded matrices with sparse structure: local spectral density of states Phys. Rev. Lett. 76 1603–6
[18] Timme M, Geisel T and Wolf F 2006 Speed of synchronization in complex networks of neural oscillators: analytic results based on random matrix theory Chaos 16 015108
[19] Luo F, Zhong J, Yang Y, Scheuermann R and Zhou J 2006 Application of random matrix theory to biological networks Phys. Lett. A 357 420–3
[20] Kwapien J, Drozdz S and Ioannides A A 2000 Cross-hemisphere correlation matrices for signals received by auditory cortex of the brain Phys. Rev. E 62 5557–64
[21] Plerou V, Gopikrishnan P, Rosenow B, Amaral L A N and Stanley H E 2000 Econophysics: financial time-series from a statistical point of view Physica A 279 443–56
[22] Santhanam M S and Patra P K 2001 Statistics of atmospheric correlations Phys. Rev. E 64 016102
[23] Huu-Tai P C, Smirnova N A and Isacker P V 2002 Generalized Wigner surmise for (2 × 2) random matrices J. Phys. A: Math. Gen. 35 L199–206
[24] Lenz G and Haake F 1991 Reliability of small matrices for large spectra with nonuniversal fluctuations Phys. Rev. Lett. 67 1–4
[25] Caurier E, Grammaticos B and Ramani A 1990 Level repulsion near integrability: a random-matrix analogy J. Phys. A: Math. Gen. 23 4903–9
[26] Shukla P 2000 Alternative techniques for complex spectra analysis Phys. Rev. E 62 2098–113
[27] Shukla P 2007 Eigenfunction statistics of complex systems: a common mathematical formulation Phys. Rev. E 75 051113
[28] Dutta R and Shukla P 2007 Complex systems with half-integer spins: symplectic ensembles Phys. Rev. E 76 051124
[29] Shukla P 2008 Towards a common thread in complexity: an accuracy driven approach J. Phys. A: Math. Theor. 41 304023
