Non-equilibrium Statistical Mechanics: The Physics of Fluctuations and Noise
SCI-B1-0910, Blok 1, 2009. Aud. M (NBI Blegdamsvej), kl. 10-12, Mondays and Fridays

John Hertz
Office: Kc-10 (NBI Blegdamsvej)
Email: hertz@nbi.dk
Tel.: 3532 5236 (office, Kbh), +46 8 5537 8808 (office, Sth), 2055 1874 (mobile)
http://www.nbi.dk/~hertz/noisecourse/coursepage.html
Source material

"Text": N. G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland) [very clear, good on general formal methods, but little recent stuff; I will not follow it slavishly in the lectures, but I recommend you buy and read it]

These two are good for anomalous diffusion:
L. Vlahos et al., Normal and Anomalous Diffusion: A Tutorial, arXiv.org/abs/0805.0419
R. Metzler and J. Klafter, The Random Walker's Guide to Anomalous Diffusion, Physics Reports 339, 1-77 (2000)

On first-passage-time problems:
S. Redner, A Guide to First-Passage Processes (Cambridge U Press) [library reserve]

On finance-theoretical applications:
J.-P. Bouchaud and M. Potters, Theory of Financial Risks: From Statistical Physics to Risk Management (Cambridge U Press) [library reserve]

(and more to be mentioned as we go along)
Lecture 1: A random walk through the course

The ubiquity of noise (especially in biology): changing conditions does not change the states; rather, it changes the relative probabilities of the states.

Example: protein conformational change.
Potential energy: [Figure: two potential-energy curves V1(x) and V2(x) as functions of the conformational coordinate x.]
The real story: [Figure: the same V1(x) and V2(x) together with the probability densities P1(x) and P2(x) peaked in the respective wells.]
From (equilibrium) statistical mechanics:
P_{1,2}(x) ∝ exp[−βV_{1,2}(x)]

This course: how P(x) changes from P1(x) to P2(x), i.e. the dynamics of P(x,t):
dP(x,t)/dt = …
or
d⟨x⟩/dt = …,   d⟨x(t1) x(t)⟩/dt = …,   etc.

Demos: www.nbi.dk/hertz/noisecourse/demos/Pseq.mat, www.nbi.dk/hertz/noisecourse/demos/runseq.m
Random walks
Demo: www.nbi.dk/~hertz/noisecourse/demos/brown.m

X_N = Σ_{i=1}^N x_i   (independent steps)
⟨x_i⟩ = 0;   ⟨x_i²⟩ = a²;   ⟨x_i · x_j⟩ = 0 (i ≠ j)

⟨X_N · X_N⟩ = Σ_{i=1}^N ⟨x_i · x_i⟩ + Σ_{i≠j} ⟨x_i · x_j⟩ = N a²

i.e. the rms distance is √⟨X_N²⟩ = √N a.
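
A minimal numerical check of this √N law (a Python sketch, not the brown.m demo linked above; the ±a step law, the step length a and the ensemble size are arbitrary assumptions):

# Sketch: check <X_N^2> = N a^2 for a walk with independent, zero-mean steps.
import numpy as np

rng = np.random.default_rng(0)
a = 1.0            # rms step length (assumed)
n_walkers = 10000  # ensemble size (assumed)

for N in (10, 100, 1000):
    steps = rng.choice([-a, a], size=(n_walkers, N))   # independent +/-a steps
    X_N = steps.sum(axis=1)                            # end-to-end displacement
    print(f"N = {N:5d}   <X_N^2> = {np.mean(X_N**2):8.1f}   N a^2 = {N * a**2:8.1f}")
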
Random walks and diffusion

Step length distribution χ(y):
∫ dy χ(y) = 1;   ∫ dy y χ(y) = 0;   ∫ dy y² χ(y) = a²

Change in P from one step:
P(x, t + Δt) = ∫ dy χ(y) P(x − y, t)
             = ∫ dy χ(y) [P(x, t) − y ∂P/∂x + ½ y² ∂²P/∂x² + …]
             = P(x, t) + ½ a² ∂²P/∂x²

Diffusion equation:
∂P/∂t = D ∂²P/∂x²,   with diffusion constant D = a²/(2Δt)

[Figure: the distribution P being smeared out by the step distribution χ.]
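
The one-step relation above can be iterated directly on a grid. The following sketch (an illustration, not one of the course demos; the Gaussian step distribution, the grid and the parameters are assumptions) convolves P repeatedly with χ and checks that the variance grows as 2Dt with D = a²/(2Δt):

# Sketch: iterate P(x, t+dt) = ∫ dy χ(y) P(x−y, t) numerically.
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)        # odd length so x = 0 is a grid point
dx = x[1] - x[0]
dt = 1.0
a = 0.5                                   # rms step length (assumed)
chi = np.exp(-x**2 / (2 * a**2))          # Gaussian step-length distribution χ(y)
chi /= chi.sum() * dx                     # normalize: ∫ dy χ(y) = 1

P = np.zeros_like(x)                      # start from a narrow pulse at x = 0
P[np.argmin(np.abs(x))] = 1.0 / dx
D = a**2 / (2 * dt)

for step in range(1, 201):
    P = np.convolve(P, chi, mode="same") * dx     # one Chapman-Kolmogorov step
    if step % 50 == 0:
        var = np.sum(x**2 * P) * dx               # variance of P(x, t); mean stays 0
        print(f"t = {step*dt:5.0f}   var = {var:8.3f}   2Dt = {2*D*step*dt:8.3f}")
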
Solution of the diffusion equation:
P(x, t) = (1/√(4πDt)) exp(−x²/(4Dt))

A Gaussian, spreading with time, with variance 2Dt.
Demo: http://www.nbi.dk/~hertz/noisecourse/gaussspread.m

[Figure: distribution obtained by simulating 20000 random walks, compared with the Gaussian solution.]
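
A sketch in the same spirit as that demo (in Python; the uniform step law, walk length and binning are assumptions): simulate 20000 random walks and compare the endpoint histogram with the Gaussian solution above.

# Sketch: 20000 random walks vs. P(x,t) = exp(-x^2/4Dt)/sqrt(4*pi*D*t).
import numpy as np

rng = np.random.default_rng(1)
n_walks, n_steps, a, dt = 20000, 400, 1.0, 1.0
D = a**2 / (2 * dt)
t = n_steps * dt

# steps uniform on [-sqrt(3)a, sqrt(3)a], so <y^2> = a^2
steps = rng.uniform(-np.sqrt(3) * a, np.sqrt(3) * a, size=(n_walks, n_steps))
x_final = steps.sum(axis=1)               # positions after time t

counts, edges = np.histogram(x_final, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
P_theory = np.exp(-centers**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

for c, h, p in zip(centers[::10], counts[::10], P_theory[::10]):
    print(f"x = {c:7.1f}   histogram = {h:.4f}   diffusion solution = {p:.4f}")
print("empirical variance:", round(x_final.var(), 1), "   2Dt =", 2 * D * t)
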
Anomalous diffusion

Normal diffusion: ⟨x²⟩ = 2Dt

An experimental counterexample: the motion of lipid granules in yeast cells (Tolic-Nørrelykke et al., Phys Rev Lett 93, 078102 (2004)), where
⟨x²⟩ ∝ t^0.75
Sub- and superdiffusion

⟨x²⟩ ∝ t^H,   H: Hurst exponent
H < 1: subdiffusion;   H > 1: superdiffusion

One way to get superdiffusion: long-time correlations between steps.
One way to get subdiffusion: long-time anti-correlations between steps.
Levy walks

Step length distribution χ(y):
∫ dy χ(y) = 1;   ∫ dy y χ(y) = 0;   ∫ dy y² χ(y) = ∞

Power-law tail in the step length distribution:
χ(y) ∝ y^{−a},   a ≤ 3

Example: Cauchy (Lorentz) distribution
χ(y) = (1/π) 1/(1 + y²)

Demo: http://www.nbi.dk/~hertz/noisecourse/levy.m (a = 5/2)

Note: ⟨x²⟩ = ∞ for all t.

Brown vs Levy
[Figure: a Brownian trajectory compared with a Levy-walk trajectory.]
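
A sketch of such a walk (not the levy.m demo itself; the cutoff y_min, the sample sizes, and the side-by-side comparison with an ordinary ±1 walk are assumptions), using inverse-transform sampling of a power-law step length with a = 5/2:

# Sketch: random walk with step lengths distributed as |y|^(-a), a = 5/2, so <y^2> = inf.
import numpy as np

rng = np.random.default_rng(2)
a = 2.5                       # power-law exponent of the step-length tail (a <= 3)
y_min = 1.0                   # lower cutoff of the power law (assumed)
n_walks, n_steps = 2000, 1000

u = 1.0 - rng.random(size=(n_walks, n_steps))        # uniform on (0, 1]
lengths = y_min * u ** (-1.0 / (a - 1.0))            # |y| ~ y^(-a) by inverse transform
signs = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))
levy = (signs * lengths).cumsum(axis=1)              # Levy-walk positions
brown = rng.choice([-1.0, 1.0], size=(n_walks, n_steps)).cumsum(axis=1)  # ordinary walk

for N in (10, 100, 1000):
    print(f"N = {N:5d}   Brownian <X^2> = {np.mean(brown[:, N-1]**2):9.1f}"
          f"   Levy sample <X^2> = {np.mean(levy[:, N-1]**2):14.1f}")
# The Brownian column grows like N; the Levy column is dominated by the few
# largest steps, reflecting the divergent second moment of χ(y).
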
Ising model
(an example of a system with many degrees of freedom)

Binary "spins" S_i(t) = ±1

Dynamics: at every time step,
(1) choose a spin i at random
(2) compute the "field" of its neighbors, h_i(t) = Σ_j J_ij S_j(t)
(3) set S_i(t + Δt) = +1 with probability
P(h_i) = 1/(1 + exp(−2h_i))

Changing the interaction strength:
Demo: http://www.nbi.dk/~hertz/noisecourse/ising.m
Varying the interaction strength: Jneighbors = 0.25, 0.45, 0.65
[Figure: spin configurations for the three coupling strengths.]
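
A minimal sketch of this update rule (Python rather than the ising.m demo; a 2D nearest-neighbour lattice with uniform coupling J, and the lattice size and number of sweeps, are assumptions), run at the three example couplings:

# Sketch: single-spin heat-bath updates with P(S_i = +1) = 1/(1 + exp(-2 h_i)).
import numpy as np

rng = np.random.default_rng(3)
L = 20                                    # lattice size (assumed)

def run(J, sweeps=150):
    S = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):       # one sweep = L*L single-spin updates
        i, j = rng.integers(0, L, size=2)                 # (1) pick a spin at random
        h = J * (S[(i + 1) % L, j] + S[(i - 1) % L, j]    # (2) field of its neighbours
                 + S[i, (j + 1) % L] + S[i, (j - 1) % L])
        S[i, j] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1   # (3)
    return S

for J in (0.25, 0.45, 0.65):
    S = run(J)
    print(f"J = {J:.2f}   |magnetization per spin| = {abs(S.mean()):.3f}")
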
Some basic concepts in probability theory

Random variable x
Probability distribution ("density") P(x). If x is discrete-valued: P(x_n) or P_n.

Normalization:   ∫ P(x) dx = 1,   Σ_n P(x_n) = 1
Averages:   ⟨A(x)⟩ ≡ ∫ A(x) P(x) dx
Moments:   ⟨x^n⟩ = ∫ x^n P(x) dx
Mean:   ⟨x⟩ (the first moment)
Some common distributions

Gaussian (normal):   P(x) = (1/√(2π)) exp(−x²/2)
Cauchy (Lorentzian):   P(x) = (1/π) 1/(1 + x²)
(One-sided) exponential:   P(x) = Θ(x) e^{−x}
Levy:   P(x) = (Θ(x)/√(2π)) x^{−3/2} exp(−1/(2x))
Poisson:   P_n = (a^n/n!) e^{−a}

Thin and fat tails
Characteristic functions

G(k) ≡ ⟨exp(ikx)⟩ = ∫ e^{ikx} P(x) dx

G(k) is the moment-generating function (expand the exponential):
G(k) = Σ_n (ik)^n/n! ⟨x^n⟩

Note: G(0) = 1 (normalization).

Gaussian:   G(k) = e^{−k²/2}        Cauchy:   G(k) = e^{−|k|}
Exponential:   G(k) = 1/(1 − ik)    Levy:   G(k) = e^{−√(−2ik)}
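
A quick Monte Carlo check of one of these results (a sketch; the Cauchy case and the sample size are arbitrary choices): estimate G(k) = ⟨exp(ikx)⟩ from samples and compare with e^{−|k|}.

# Sketch: Monte Carlo estimate of the Cauchy characteristic function.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_cauchy(size=200_000)

for k in (0.0, 0.5, 1.0, 2.0):
    G_mc = np.mean(np.exp(1j * k * x)).real     # imaginary part averages to ~0
    print(f"k = {k:4.1f}   MC estimate = {G_mc:7.4f}   exp(-|k|) = {np.exp(-abs(k)):7.4f}")
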
Cumulants

Expanding log G generates the cumulants κ_n:
log G(k) ≡ Σ_{n=1}^∞ (ik)^n/n! κ_n

κ_1 = ⟨x⟩   (mean)
κ_2 = ⟨x²⟩ − ⟨x⟩² = ⟨(x − ⟨x⟩)²⟩ ≡ σ²   (variance)
κ_3 = ⟨x³⟩ − 3⟨x²⟩⟨x⟩ + 2⟨x⟩³
κ_4 = ⟨x⁴⟩ − 4⟨x³⟩⟨x⟩ − 3⟨x²⟩² + 12⟨x²⟩⟨x⟩² − 6⟨x⟩⁴

skewness: γ_3 = κ_3/κ_2^{3/2};   kurtosis: κ_4/κ_2²
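
A numerical illustration of these formulas (a sketch; the choice of the one-sided exponential, whose cumulants are κ_n = (n−1)!, and the sample size are assumptions): compute κ_1..κ_4 from sample moments.

# Sketch: cumulants from sample moments for P(x) = Θ(x) exp(-x); exact κ = 1, 1, 2, 6.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=1_000_000)

m1, m2, m3, m4 = (np.mean(x**n) for n in (1, 2, 3, 4))
k1 = m1
k2 = m2 - m1**2
k3 = m3 - 3*m2*m1 + 2*m1**3
k4 = m4 - 4*m3*m1 - 3*m2**2 + 12*m2*m1**2 - 6*m1**4

print("kappa_1..4 :", [round(v, 3) for v in (k1, k2, k3, k4)], "   exact: [1, 1, 2, 6]")
print("skewness   :", round(k3 / k2**1.5, 3), "   kurtosis:", round(k4 / k2**2, 3))
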
Multivariate distributions

P(x_1, …, x_n),   P(x)

Marginal distribution of x_1:   P(x_1) = ∫ P(x_1, …, x_n) dx_2 ⋯ dx_n
P(x_1, x_2) = ∫ P(x_1, …, x_n) dx_3 ⋯ dx_n,   etc.

Independence:   P(x, y) = P(x) P(y)

Conditional probabilities P(y|x):   P(x, y) = P(x|y) P(y) = P(y|x) P(x)

Bayes' rule:
P(y|x) = P(x|y) P(y)/P(x) = P(x, y)/P(x)
Adding random variables

x: P_1(x);   y: P_2(y)

z = x + y:
P(z) = ∫ δ(z − x − y) P_1(x) P_2(y) dx dy = ∫ P_1(x) P_2(z − x) dx

Characteristic functions multiply:   G(k) = G_1(k) G_2(k)
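
A small numerical illustration (a sketch; the exponential example and the sample size are arbitrary choices): for two independent one-sided exponentials, G_1(k) = G_2(k) = 1/(1 − ik), so G(k) = 1/(1 − ik)² and the convolution gives P(z) = z e^{−z}.

# Sketch: histogram of z = x + y for two independent exponentials vs. z*exp(-z).
import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(size=500_000)
y = rng.exponential(size=500_000)
z = x + y

counts, edges = np.histogram(z, bins=np.linspace(0, 8, 33), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers[::4], counts[::4]):
    print(f"z = {c:5.2f}   histogram = {h:.3f}   z*exp(-z) = {c*np.exp(-c):.3f}")
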
Change of variables:

x: P_x(x),   y = f(x)

P_y(y) = ∫ P_x(x) δ[y − f(x)] dx
       = ∫ P_x(x) δ[y − f(x)] (dx/dy) dy
       = P_x(f^{−1}(y)) |df^{−1}(y)/dy|

(or use P_y(y) dy = P_x(x) dx)

Multivariate case:
P_y(y) = P_x(f^{−1}(y)) |∂f^{−1}(y)/∂y|
where the factor |∂f^{−1}(y)/∂y| is the inverse of the Jacobian J, with J_ij = ∂y_i/∂x_j.
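
A sketch of the one-variable formula in action (the choice f(x) = e^x with x standard normal, and the sample size, are assumptions): here f^{−1}(y) = ln y, |df^{−1}/dy| = 1/y, so P_y(y) = P_x(ln y)/y.

# Sketch: change of variables y = exp(x) for standard-normal x (lognormal density).
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(1_000_000)
y = np.exp(x)                                     # y = f(x) = e^x

counts, edges = np.histogram(y, bins=np.linspace(0.0, 30.0, 241), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

P_x = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # standard normal density
for target in (0.5, 1.0, 2.0, 4.0):
    i = np.argmin(np.abs(centers - target))
    print(f"y = {centers[i]:5.3f}   histogram = {counts[i]:.3f}"
          f"   P_x(ln y)/y = {P_x(np.log(centers[i]))/centers[i]:.3f}")
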
Gaussian (normal) distribution

P(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))

Characteristic function:   G(k) = exp(ikμ − ½k²σ²)

Cumulants:   κ_1 = μ,   κ_2 = σ²,   κ_m = 0 for m > 2

Moments (μ = 0 case):   ⟨x^{2n}⟩ = (2n − 1)!! ⟨x²⟩^n ≡ (2n − 1)(2n − 3) ⋯ 3 · 1 ⟨x²⟩^n
Multivariate Gaussian

Correlation matrix:   C_jk ≡ ⟨(x_j − ⟨x_j⟩)(x_k − ⟨x_k⟩)⟩

P(x) = (1/((2π)^{d/2} √(det C))) exp[−½ Σ_{jk} (x_j − ⟨x_j⟩)(C^{−1})_{jk}(x_k − ⟨x_k⟩)]

Characteristic function:
G(k) = exp(i Σ_j k_j ⟨x_j⟩ − ½ Σ_{jk} k_j C_jk k_k)

Higher moments (Wick's theorem, for zero means; otherwise apply it to the fluctuations x_j − ⟨x_j⟩):
⟨x_1 x_2 x_3 x_4⟩ = ⟨x_1 x_2⟩⟨x_3 x_4⟩ + ⟨x_1 x_3⟩⟨x_2 x_4⟩ + ⟨x_1 x_4⟩⟨x_2 x_3⟩
(the sum of all pairwise contractions), etc. for higher orders.
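
A numerical check of Wick's theorem (a sketch; the particular correlation matrix C and the sample size are arbitrary assumptions): sample a zero-mean Gaussian with covariance C and compare ⟨x_1 x_2 x_3 x_4⟩ with the sum of pairwise contractions.

# Sketch: Wick's theorem for the fourth moment of a zero-mean multivariate Gaussian.
import numpy as np

rng = np.random.default_rng(8)
C = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(4), cov=C, size=1_000_000)

lhs = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
rhs = C[0, 1]*C[2, 3] + C[0, 2]*C[1, 3] + C[0, 3]*C[1, 2]   # sum of pairwise contractions
print("<x1 x2 x3 x4> from samples:", round(lhs, 3), "   Wick:", round(rhs, 3))
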
Central limit theorem

Sum of N iid random variables:
y = (1/√N) Σ_{i=1}^N x_i

Distribution of the x_i: p(x_i), assumed to have finite variance (and, here, zero mean).

Characteristic function of the x_i:
g(k) = exp(−½k²σ² − (i/3!)k³κ_3 + (1/4!)k⁴κ_4 + …)

Characteristic function of y:
G(k) = [g(k/√N)]^N
log G(k) = N log g(k/√N)
         = N (−k²σ²/(2N) − ik³κ_3/(6N^{3/2}) + k⁴κ_4/(24N²) + …)
         → −½k²σ²   as N → ∞

So y is Gaussian!
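
A numerical illustration of this argument (a sketch; the centred exponential for the x_i and the sample size are arbitrary choices): the skewness and excess kurtosis of y shrink roughly like N^{−1/2} and N^{−1}, just as the 1/N^{3/2} and 1/N² terms in log G suggest.

# Sketch: scaled sums of iid centred exponentials approach a Gaussian.
import numpy as np

rng = np.random.default_rng(9)
n_samples = 100_000

for N in (2, 10, 100):
    x = rng.exponential(size=(n_samples, N)) - 1.0   # zero mean, unit variance
    y = x.sum(axis=1) / np.sqrt(N)
    m2, m3, m4 = np.mean(y**2), np.mean(y**3), np.mean(y**4)
    print(f"N = {N:4d}   var = {m2:5.3f}   skewness = {m3/m2**1.5:6.3f}"
          f"   excess kurtosis = {m4/m2**2 - 3:6.3f}")
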
Stable distributions

N^{−1/2} × (sum of N Gaussian variables) has the same distribution as the original variables: stable.

Are there distributions which are stable, but with a different scaling factor N^{−1/α} (α ≠ 2) instead?

Require
[g(k/N^{1/α})]^N = g(k),   i.e.   N log g(k/N^{1/α}) = log g(k)

Solution:
log g(k) = −ck^α,   or   g(k) = exp(−ck^α)   (α ≤ 2)

This is the characteristic function for a stable distribution of order α.

Stable distribution of order α:
P_α(x) = ∫ g_α(k) e^{−ikx} dk/(2π) = ∫ exp(−ikx − ck^α) dk/(2π)

Asymptotic behaviour for large x:   P(x) ~ 1/x^{1+α}

Note: stable distributions have infinite variance for α < 2, and infinite mean for α < 1.

For symmetric distributions, use
g_α(k) = exp(−c|k|^α);   P_α(x) = ∫ exp(−ikx − c|k|^α) dk/(2π)
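
The symmetric case can be inverted numerically. A sketch (with c = 1 and the k-grid as assumptions) that evaluates P_α(x) = (1/π) ∫_0^∞ cos(kx) exp(−c k^α) dk and checks it against the two closed-form cases listed below (α = 1: Cauchy, α = 2: Gaussian):

# Sketch: numerical Fourier inversion of the symmetric stable characteristic function.
import numpy as np

k = np.linspace(0.0, 60.0, 600_001)          # integration grid; integrand negligible by k ~ 60
dk = k[1] - k[0]

def P_alpha(x, alpha, c=1.0):
    f = np.cos(k * x) * np.exp(-c * k**alpha)
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dk / np.pi   # trapezoidal rule

for x in (0.5, 1.0, 3.0):
    cauchy = 1.0 / (np.pi * (1.0 + x**2))                   # exact, alpha = 1, c = 1
    gauss = np.exp(-x**2 / 4.0) / (2.0 * np.sqrt(np.pi))    # exact, alpha = 2, c = 1
    print(f"x = {x:4.1f}   P_1(x) = {P_alpha(x, 1.0):.5f} (exact {cauchy:.5f})"
          f"   P_2(x) = {P_alpha(x, 2.0):.5f} (exact {gauss:.5f})")
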
Stable distributions: examples

Special cases (where the Fourier inversion can be done analytically):
α = 1/2: Levy
α = 1: Cauchy/Lorentzian
α = 3/2: Holtsmark
α = 2: Gaussian