Lattice Quantum Field Theory · 3rd Campus · 2020. 8. 19.

3rd Campus on Numerical Lattice Quantum Field Theory Numerical Integration and Harmonic Oscillator Xiaonu Xiong 1


  • 3rd Campus on Numerical

    Lattice Quantum Field Theory

    Numerical Integration and Harmonic Oscillator

    Xiaonu Xiong

    1

  • Part I: Numerical Integration

    2

  • Category of numerical integration algorithm

    Deterministic algorithm

    Return identical results with the same setup, predictable

    Examples: rectangle approximation, trapezium approximation. Pros: high performance in low dimensions, reliable error estimation. Cons: poor performance in high dimensions.

    Non-Deterministic algorithm

    Return different results with the same setup, unpredictable

    Examples: MISER algorithm, Vegas algorithm. Pros: high performance in high dimensions. Cons: usually slow, probability-based error estimation.

    3

  • Deterministic algorithm

    1-d Example integrand:

    Rectangle approximation (middle-point rule)

    ∫_a^b dx f(x) = lim_{N→∞} Σ_{i=0}^{N−1} δx f(a + (i + 1/2) δx),  δx = (b − a)/N

    f(x) = exp(−x²) + exp[−(x − 1/2)⁴]

    4
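As a sketch of the midpoint (rectangle) rule above, in Python; the interval [−4, 4] is an assumption chosen so both terms of the slide's example integrand have decayed to negligible values:

```python
import math

def f(x):
    # example integrand from the slide: exp(-x^2) + exp(-(x - 1/2)^4)
    return math.exp(-x * x) + math.exp(-(x - 0.5) ** 4)

def midpoint_rule(f, a, b, n):
    # rectangle (midpoint) approximation: sum f at interval midpoints times dx
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

approx = midpoint_rule(f, -4.0, 4.0, 1000)
```

The exact value over the whole real line is √π + Γ(1/4)/2, which the finite interval reproduces to high accuracy.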

  • 1-d Example integrand:

    Trapezium approximation

    ∫_a^b dx f(x) = lim_{N→∞} Σ_{i=0}^{N−1} (δx/2) [f(a + iδx) + f(a + (i + 1)δx)],  δx = (b − a)/N

    f(x) = exp(−x²) + exp[−(x − 1/2)⁴]

    5

  • 2-d Example integrand:

    Cuboid approximation

    ∫_a^b dx ∫_c^d dy f(x, y) = lim_{N₁,N₂→∞} Σ_{i,j=1}^{N₁,N₂} δx δy f(a + iδx, c + jδy),  δx = (b − a)/N₁,  δy = (d − c)/N₂

    f(x, y) = exp(−x² − y²) + exp[−(x − 1/2)⁴ − (y − 1/2)⁴]

    6

  • Error estimation

    1-dimension

    rectangle and trapezium approximation: error ∼ O(N_pnts^{−2})

    Simpson rule: error ∼ O(N_pnts^{−4})

    d-dimension

    error ∼ O(N_pnts^{−λ₁/d}), λ₁: the error-scaling power of the 1-d case

    Fit: ln err = −(λ₁/d) ln N_pnts + C ⟹ err = e^C N_pnts^{−λ₁/d}

    Fit results: d = 1: λ₁/d = 1.985;  d = 2: λ₁/d = 0.992;  d = 3: λ₁/d = 0.668

    7
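The quoted N^{−2} scaling of the midpoint rule can be checked numerically; this sketch extracts λ₁ from two grid sizes (the Gaussian integrand and interval are illustrative choices, not from the slides):

```python
import math

def g(x):
    return math.exp(-x * x)

def midpoint(f, a, b, n):
    # composite midpoint rule with n equal subintervals
    dx = (b - a) / n
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(n))

exact = math.sqrt(math.pi) * math.erf(2.0)   # ∫_{-2}^{2} e^{-x²} dx
err = {n: abs(midpoint(g, -2.0, 2.0, n) - exact) for n in (100, 200)}
# err ~ N^{-λ1}  ⟹  λ1 = ln(err_100 / err_200) / ln 2
lam1 = math.log(err[100] / err[200]) / math.log(2.0)
```

Doubling N should divide the error by roughly 4, i.e. λ₁ ≈ 2.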

  • Monte-Carlo Integration

    Hit-or-Miss method

    ∫_a^b dx f(x) = M(b − a) × lim_{N→∞} N₀/N

    where N points are thrown uniformly into the box [a, b] × [0, M] and N₀ of them fall below the curve f(x)

  • Hit-or-Miss method

    The hit count builds up the area: N₀ = lim_{Δx→0} Σ_x (hits in the strip of width Δx at x) ∝ Σ_x f(x) Δx, so N₀/N approaches the fraction of the box lying below f(x)

  • Hit-or-Miss method

    Defects:

    lots of sampling points are wasted in non-essential regions, and this becomes worse as the dimension grows

    Requires M ≥ max(f), i.e. prior information about f(x)

    10
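A minimal hit-or-miss sketch for ∫ e^{−x²} dx on [−3, 3] with bound M = 1 (the integrand, interval, and sample count are illustrative assumptions):

```python
import math, random

def f(x):
    return math.exp(-x * x)

def hit_or_miss(f, a, b, M, n, seed=0):
    # throw n points uniformly in the box [a,b] x [0,M];
    # I ≈ M (b - a) N0 / N with N0 the number of points below the curve
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.uniform(0.0, M) < f(rng.uniform(a, b)))
    return M * (b - a) * hits / n

estimate = hit_or_miss(f, -3.0, 3.0, M=1.0, n=200_000)
```

The exact answer is √π erf(3) ≈ √π; note that most of the box [−3, 3] × [0, 1] lies above the curve, which is exactly the waste the slide criticizes.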

  • Hint from the identity:

    ∫_a^b dx f(x) = ∫_a^b dx p(x) [f(x)/p(x)] = ⟨f(x)/p(x)⟩_{x_i ∼ p(x)}

    If we choose p(x) to closely resemble the profile of f(x), or equivalently make f(x)/p(x) roughly a constant, then the wasted sampling points are largely reduced.

    Importance sampling: sampling according to the importance of the integrand to the integration:

    larger/smaller f(x) ↔ larger/smaller p(x)

    11

  • Importance sampling

    Approximate the profile of the integrand f(x) by some probability distribution function p(x)

    Generate sample points {x₁, x₂, ⋯, x_N} according to p(x)

    Integral estimator:

    ∫_a^b dx f(x) = ∫_a^b dx p(x) [f(x)/p(x)] = ⟨f/p⟩ ≈ (1/N) Σ_{i=1}^{N} f(x_i)/p(x_i)

    p(x_i) = (b − a)^{−1} reduces to the uniform sampling case: ∫_a^b dx f(x) ≈ [(b − a)/N] Σ_i f(x_i)

    p(x) does not necessarily have to resemble the profile of f(x) exactly. "Exactly" means p(x) = f(x)/∫ dx f(x), which is impossible before the integral is known.

    12
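A minimal importance-sampling sketch, assuming we target ∫ e^{−x²} dx over the real line and take the standard normal as p(x) (both are illustrative choices):

```python
import math, random

rng = random.Random(1)
N = 100_000

def f(x):
    # integrand: exp(-x²), integrated over the whole real line
    return math.exp(-x * x)

def p(x):
    # sampling distribution: standard normal pdf
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# draw x_i ~ p(x), estimate I = (1/N) Σ f(x_i)/p(x_i)
I = sum(f(x) / p(x) for x in (rng.gauss(0.0, 1.0) for _ in range(N))) / N
```

Here f/p = √(2π) e^{−x²/2} varies slowly where samples land, so the variance is small and the estimate converges quickly to √π.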

  • Error estimation of Monte-Carlo algorithm

    Theoretical base: Central Limit Theorem

    Suppose ξ ∼ p(ξ), μ = ∫ dξ ξ p(ξ), σ² = ∫ dξ (ξ − μ)² p(ξ)

    ❶ Sample N samples: ξ₁, ξ₂, ⋯, ξ_N ⟹ μ̃ = Σ_{i=1}^{N} ξ_i / N

    ❷ Repeat ❶ for M times ⟹ μ̃₁, μ̃₂, ⋯, μ̃_M

    The distribution of μ̃ approaches N(μ, σ/√N) if N is large enough

    Demonstration: M = 2 × 10⁴, N = 6 × 10³ ∼ 6 × 10⁴

    Red: the Gaussian (√N / (√(2π) σ)) exp[−x² N / (2σ²)], i.e. width σ_N = σ/√N; Blue: histogram distribution of μ̃

    Distribution of μ̃ for ξ ∼ U(0, 1): μ = 1/2, σ = 1/(2√3); distribution of μ̃ for ξ ∼ χ(4): μ = 4, σ = 2√2

    13
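The ❶/❷ procedure is easy to reproduce; this sketch uses ξ ∼ U(0, 1), for which μ = 1/2 and σ = 1/(2√3) (the values of N and M are illustrative, smaller than the slide's):

```python
import random, statistics

rng = random.Random(42)
N, M = 1000, 2000

# step 1: one sample mean over N draws; step 2: repeat M times
means = [sum(rng.random() for _ in range(N)) / N for _ in range(M)]

mu_tilde = statistics.mean(means)
sigma_tilde = statistics.stdev(means)
# CLT prediction: means ~ N(mu, sigma/sqrt(N)) = N(0.5, 0.28868/sqrt(1000))
```

The spread of the sample means shrinks like σ/√N, matching the red Gaussian on the slide.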

  • Error estimation of Monte-Carlo algorithm

    Error estimation is in a probability sense: the true value μ has 68.3% probability of lying in μ̃ − σ/√N ≤ μ ≤ μ̃ + σ/√N

    Error scaling ∼ O(1/√N) and does NOT depend on the dimension

    Increase accuracy by: N ↑ (halving the error requires 4N points), or σ ↓ (variance reduction)

    14

  • Importance sampling

    1-D example: x ∼ p(x) = (1/√(2π)) e^{−x²/2}

    red: target p(x)

    light-blue: histogram distribution of samples

    As N_pnts grows larger, the histogram distribution → p(x).

    15

  • Importance sampling

    Prior information about the integrand is required for choosing a proper p(x).

    It is difficult to choose a proper p(x) in the high-dimensional integration case.

    It cannot adapt to different integrands with distinct profiles.

    Solution: an adaptive sampling algorithm that adjusts itself to mimic different integrands:

    Vegas Algorithm

    16

  • Vegas Algorithm

    G.P. Lepage, A New Algorithm for Adaptive Multidimensional Integration,Journal of Computational Physics 27, 192–203, (1978)

    No prior information about the profile of f(x) (the integrand) is required; the algorithm automatically generates p(x) approximating f(x)'s profile.

    Converges faster than naive hit-or-miss sampling.

    17

    https://www.sciencedirect.com/science/article/pii/0021999178900049

  • Vegas Algorithm

    ➊ Uniformly generate the grid x = {x₁, ⋯, x_{N−1}}, 0 < x₁ < ⋯ < x_{N−1} < 1, and sampling points in each interval

    ➋ Subdivide interval i into m_i subintervals, where

    m_i = K [(r_i − 1) / log r_i]^α,  r_i ≡ f̄_i Δx_i / Σ_j f̄_j Δx_j

    f̄_i = Σ_{x∈[x_{i−1}, x_i]} |f(x)| ≈ (1/Δx_i) ∫_{x_{i−1}}^{x_i} dx |f(x)|

    Empirically, α ∼ O(1) is used (for reference, GSL's VEGAS defaults to α = 1.5)

    There exist other forms of suggesting m_i, e.g. m_i = K (f̄_i Δx_i / Σ_j f̄_j Δx_j)

    Practically, α is used to damp the subdividing procedure and avoid rapid changes of the intervals' sizes (which introduce unstable effects)

  • Vegas Algorithm

    ➌ Merge all subdivisions back into N intervals x′ = {x₁′, x₂′, ⋯, x_{N−1}′}

    ➍ Repeat ➊ ➋ ➌ until the optimal grid is reached (the m_i become constant with respect to the iteration number)

    In ➌, N_dvsns ≈ const while adjusting Δx_i ↓ (↑) where |f(x)| ↑ (↓)

    As shown in the animation, as the iterations proceed, regions with larger (smaller) contributions to the integral get denser subdivisions and smaller intervals.

    19
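Steps ➊ to ➍ can be sketched for one refinement pass in 1-d; this is a simplified illustration of the grid update, not Lepage's production code (the sample count, K, and α below are assumed values):

```python
import math, random

def refine_grid(f, grid, n_samples=20000, K=50, alpha=1.5, seed=0):
    # One Vegas-style refinement step (a sketch): estimate fbar_i per interval,
    # split interval i into m_i pieces via the damped formula
    # m_i = K * ((r_i - 1)/log r_i)^alpha, then merge back to the original
    # number of intervals so each new bin holds an equal share of the weight.
    rng = random.Random(seed)
    nbins = len(grid) - 1
    sums, counts = [0.0] * nbins, [0] * nbins
    for _ in range(n_samples):
        i = rng.randrange(nbins)
        x = rng.uniform(grid[i], grid[i + 1])
        sums[i] += abs(f(x))
        counts[i] += 1
    fbar = [s / max(c, 1) for s, c in zip(sums, counts)]
    w = [fb * (grid[i + 1] - grid[i]) for i, fb in enumerate(fbar)]
    tot = sum(w)
    m = []
    for wi in w:
        r = min(max(wi / tot, 1e-12), 0.999)
        m.append(max(1, round(K * ((r - 1.0) / math.log(r)) ** alpha)))
    fine = [grid[0]]
    for i, mi in enumerate(m):
        step = (grid[i + 1] - grid[i]) / mi
        fine.extend(grid[i] + step * (j + 1) for j in range(mi))
    total = len(fine) - 1
    # keep every (total/nbins)-th fine point: equal subdivision count per bin
    return [fine[round(k * total / nbins)] for k in range(nbins + 1)]

# demo: a sharp Gaussian bump; the grid should concentrate near x = 0.5
def bump(x):
    return math.exp(-100.0 * (x - 0.5) ** 2)

grid = [i / 10 for i in range(11)]
for it in range(5):
    grid = refine_grid(bump, grid, seed=it)
```

After a few passes the intervals near the peak are much narrower than those in the tails, mimicking the animation on the slide.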

  • Vegas Algorithm: Multi-dimension case (e.g. 2-d)

    Assume the probability distribution is factorizable along the axes:

    p(x, y) = p_x(x) p_y(y)

    Replace f̄_i in step ➋ of the 1-dimensional case by

    f̄²_{x,i} ≡ Σ_{x∈[x_{i−1},x_i]} Σ_{y∈[0,1]} f(x, y)² / p_y(y)² ≈ (1/Δx_i) ∫_{x_{i−1}}^{x_i} dx ∫_0^1 dy f(x, y)²

    f̄²_{y,i} ≡ Σ_{y∈[y_{i−1},y_i]} Σ_{x∈[0,1]} f(x, y)² / p_x(x)² ≈ (1/Δy_i) ∫_{y_{i−1}}^{y_i} dy ∫_0^1 dx f(x, y)²

    Then calculate the corresponding m_{x,i}, m_{y,i} and follow steps ➌, ➍ in the x and y directions separately

    20

  • Vegas Algorithm: Multi-dimension case (e.g. 2-d)

    Demo animation on adaptive gridding

    21

  • Vegas Algorithm

    Numerical libraries

    CPU: GNU Scientific Library (GSL); CUBA, a library for multidimensional numerical integration

    GPU: gVegas by J. Kanzaki

    Why GPU?

    Each iteration evaluates m_i and f̄_i from the integrand at O(10⁴ ∼ 10⁶) points x_i, which is suitable for parallel execution.

    22

    https://www.gnu.org/software/gsl/doc/html/montecarlo.html#vegas
    https://www.sciencedirect.com/science/article/pii/S0010465505000792
    https://arxiv.org/abs/1010.2107

  • CPU vs GPU

                             CPU (i9-9900K)            GPU (TU-102)
    die                      360 mm²                   770 mm²
    cores                    8                         4352
    performance              1.22 / 0.61 TFLOPS        26.9 / 13.45 TFLOPS
    (single/double precision)
    capability               complicated operations    simple operations
                             (Ph.D. students)          (middle school students)
    task: ∫ dx e^{−x}, y″(t) + k y(t) = 0
                             efficient and precise     unable to process
    task: square of π + (1, 2, ⋯, 4000)
                             slow                      very fast

    23

  • Importance sampling vs Stratified sampling

    Importance sampling

    Finer sampling points where the integrand value is larger

    Stratified sampling (variance reduction)

    Finer sampling points where the potential error is larger

    Routines using stratified sampling:

    GSL MISER

    Divonne in the CUBA library

    GSL VEGAS also applies stratified sampling after adaptive importance sampling for error reduction

    24

    https://www.gnu.org/software/gsl/doc/html/montecarlo.html#miser
    http://www.feynarts.de/cuba/
    https://www.gnu.org/software/gsl/doc/html/montecarlo.html#vegas

  • Monte-Carlo integration - Integral and error estimator

    I = ∫ dx f(x) = ∫ dx p(x) [f(x)/p(x)] = ⟨f/p⟩ ≈ (1/N_pnts) Σ_{i=1}^{N_pnts} f(x_i)/p(x_i)

    In each iteration:

    I = (1/N_pnts) Σ_{i=1}^{N_pnts} f(x_i)/p(x_i),  σ² = [E(f²/p²) − I²] / (N_pnts − 1),  E(f²/p²) = (1/N_pnts) Σ_{i=1}^{N_pnts} f(x_i)²/p(x_i)²

    Accumulating all iterations:

    I = Σ_{i=0}^{n−1} I_i (I_i²/σ_i²) / Σ_{i=0}^{n−1} (I_i²/σ_i²)

    25
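The per-iteration estimators and the weighted accumulation above can be sketched as follows (the Gaussian test integrand and sampling pdf are illustrative assumptions):

```python
import math, random

def mc_iteration(f, p, draw, n, rng):
    # one iteration: I = (1/n) Σ f(x_i)/p(x_i),
    # σ² = ( (1/n) Σ (f/p)² − I² ) / (n − 1)
    vals = [f(x) / p(x) for x in (draw(rng) for _ in range(n))]
    I = sum(vals) / n
    var = (sum(v * v for v in vals) / n - I * I) / (n - 1)
    return I, math.sqrt(max(var, 0.0))

def combine(results):
    # accumulate iterations with weights I_i²/σ_i², as on the slide
    wts = [(I * I) / (s * s) for I, s in results]
    return sum(I * w for (I, _), w in zip(results, wts)) / sum(wts)

rng = random.Random(7)
f = lambda x: math.exp(-x * x)
p = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
results = [mc_iteration(f, p, lambda r: r.gauss(0.0, 1.0), 20_000, rng)
           for _ in range(5)]
I_total = combine(results)
```

Five iterations combined this way recover ∫ e^{−x²} dx = √π with an error well below any single iteration's σ.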

  • Monte-Carlo integration - error scaling

    σ_I ∼ O(N_pnts^{−1/2}), which is independent of the dimension d.

    For d > 4, the Monte-Carlo algorithm beats deterministic methods (rectangle, trapezium approximation)

    Probability theory utilities are available

    Minimizing the error: p_prfct(x) precisely matches f(x)'s profile ⟺ p_prfct(x) = f(x)/C ⟺ f/p_prfct = C = ⟨f/p_prfct⟩, σ = 0, with C = ∫_a^b dx f(x)

    HOWEVER, integrating f(x) is closely related to sampling the distribution p_prfct(x) = f(x)/C

    26

  • Part II Application: Sampling a distribution

    27

  • Random number generation

    True Random Number Generator (TRNG)

    Needs hardware support (physical quantum effect on board)

    Human inputs: mouse movements, network traffic, ..., The BIG Bell Test

    Pseudo Random Number Generator (PRNG)

    Deterministic

    Hard to predict if one does not know which algorithm is used

    Randomness test

    Quasi-random numbers, low-discrepancy sequences, ...

    28

    https://www.nature.com/articles/s41586-018-0085-3


  • Sampling the uniform distribution U(0, 1)

    Linear Congruential Generator (LCG)

    Mersenne twister

    Xorshift and its variants (as used in gVegas)

    ...

    30
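As an illustration of the LCG family (the constants a, c, m below are the well-known Numerical Recipes choices, an assumption since the slide fixes none):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Linear Congruential Generator: state ← (a·state + c) mod m,
    # returned as floats in [0, 1)
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state / m

gen = lcg(12345)
u = [next(gen) for _ in range(10_000)]
```

LCGs are fast and tiny but have known correlations in high dimensions, which is why generators like the Mersenne twister or Xorshift are preferred in practice.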

  • Sampling an arbitrary distribution: Metropolis-Hastings algorithm

    ➊ Initialize x₀, set i = 0

    ➋ For i = 0; i < N: propose x′ from a proposal q(x′|x_i); accept with probability min(1, [p(x′) q(x_i|x′)] / [p(x_i) q(x′|x_i)]); set x_{i+1} = x′ if accepted, otherwise x_{i+1} = x_i

    31
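A minimal random-walk Metropolis sketch for a 1-d target; the symmetric proposal makes the q-ratio drop out, and the standard-normal target and step size are illustrative assumptions:

```python
import math, random, statistics

def metropolis(logp, x0, n, step=1.5, seed=0):
    # random-walk Metropolis: propose x' = x + U(-step, step) and accept
    # with probability min(1, p(x')/p(x)); on rejection, repeat the old point
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        dl = logp(xp) - logp(x)
        if dl >= 0.0 or rng.random() < math.exp(dl):
            x = xp
        chain.append(x)
    return chain

# target: standard normal, log p(x) = -x²/2 up to a constant
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 200_000)
```

Only the ratio p(x′)/p(x) is needed, so the (usually unknown) normalization of p drops out entirely.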

  • Part III Application: Path integral of 1-D quantum harmonic oscillator

    32

  • Action of Harmonic oscillator

    S = ∫ dt L(x, ẋ) = ∫ dt [m ẋ²/2 − k x²/2]

    Wick rotation τ = it, d/dt = i d/dτ:

    iS = −S_E,  S_E = ∫ dτ [m ẋ²/2 + k x²/2] = ∫ dτ H

    Natural units: ℏ = c = 1

    Propagator:

    ⟨x_f, τ_f | x_i, τ_i⟩ = ∫ Dx(τ) e^{−S_E[x(τ)]}

    33

  • Position space eigen function

    ⟨x, τ_f | x, τ_i⟩ = ⟨x| e^{−H(τ_f − τ_i)} |x⟩ = Σ_{n=0}^{∞} ⟨x|n⟩⟨n|x⟩ e^{−E_n(τ_f − τ_i)} = Σ_{n=0}^{∞} |ψ_n(x)|² e^{−E_n(τ_f − τ_i)}

    For large enough τ_f − τ_i, only the ground state survives:

    ⟨x, τ_f | x, τ_i⟩ = ∫ Dx(τ) e^{−S_E[x(τ)]} ≈ |ψ₀(x)|² e^{−E₀(τ_f − τ_i)}

    and the ground state energy follows from

    ∫ dx ⟨x, τ_f | x, τ_i⟩ = ∫ dx |ψ₀(x)|² e^{−E₀(τ_f − τ_i)} = e^{−E₀(τ_f − τ_i)}

    34

  • Discretized form of path integral

    ∫ Dx(τ) e^{−S_E[x(τ)]} = lim_{N→∞} [N / (2π(τ_f − τ_i))]^{N/2} ∫ Π_{k=1}^{N−1} dx_k exp{−δτ Σ_{k=0}^{N−1} [½((x_{k+1} − x_k)/δτ)² + ½ x_k²]}

    Discretized timespace: x_k = x(τ_k), x₀ = x_i, x_N = x_f, δτ = (τ_f − τ_i)/N

    Discretized action (Euclidean):

    S̃_E = δτ Σ_{k=0}^{N−1} [½((x_{k+1} − x_k)/δτ)² + ½ x_k²]

    ∫ Dx(τ) = lim_{N→∞} ∫ Π_{k=1}^{N−1} dx_k

    Functional path integral ↔ (N − 1)-dimensional numerical integration

    35
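The discretized Euclidean action S̃_E translates directly into code; this sketch takes a path as a list [x₀, ⋯, x_N] with m = k = 1, as in the slides:

```python
def euclidean_action(path, dtau):
    # S̃_E = δτ Σ_{k=0}^{N-1} [ ((x_{k+1} - x_k)/δτ)²/2 + x_k²/2 ]
    S = 0.0
    for k in range(len(path) - 1):
        kinetic = 0.5 * ((path[k + 1] - path[k]) / dtau) ** 2
        potential = 0.5 * path[k] ** 2
        S += dtau * (kinetic + potential)
    return S
```

The trivial path x ≡ 0 has zero action; a constant path x ≡ 1 over N = 8 steps of δτ = 0.5 gives S̃_E = 8 × 0.5 × 0.5 = 2.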

  • Functional path integral ↔ (N − 1)-dimensional numerical integration

    One set of {x₁, x₂, ⋯, x_{N−1}} defines one path: x₁ = x(τ₁), x₂ = x(τ₂), ⋯, x_N = x(τ_N)

    In lattice QFT terminology: path ↔ field configuration

    HO: x(τ), position distributed over time; x(τ₁), x(τ₂), ⋯, x(τ_N)

    Field: ϕ(x, τ), field strength distributed over spacetime; ϕ(x₁, τ₁), ϕ(x₁, τ₂), ⋯, ϕ(x_{N′}, τ_N)

    36

  • Numerical Integration

    The integrand is a (very) high-dimensional function

    F(x₁, x₂, ⋯, x_{N−1}) = exp{−δτ Σ_{k=0}^{N−1} [½((x_{k+1} − x_k)/δτ)² + ½ x_k²]}

    Suppose we want to extract ψ₀(x) and E₀:

    Requires large τ_f − τ_i

    Smaller discretization error (small δτ) ⟺ large N = (τ_f − τ_i)/δτ ⟺ large dimension d

    Need Monte-Carlo integration!

    37

  • Path integral - Vegas integration

    e.g. intermediate x, N = 9, T = τ_f − τ_i = 4, δτ = 0.5

    Red: Vegas integration of ⟨x, τ_f | x, τ_i⟩

    Black: canonical quantization, |ψ₀(x)|² e^{−E₀ T} = (2π)^{−1/2} e^{−x²} e^{−E₀(τ_f − τ_i)}

    38

  • Since Monte-Carlo integration is closely related to sampling

    x(τ) ∼ p[x(τ)] = e^{−S[x(τ)]} / ∫ Dx(τ) e^{−S[x(τ)]}

    The expectation value of an observable O[x(τ)]:

    ⟨O⟩ = ∫ Dx(τ) e^{−S[x(τ)]} O[x(τ)] / ∫ Dx(τ) e^{−S[x(τ)]}

    If we can generate {x^(1), x^(2), ⋯, x^(N)} ∼ p[x(τ)] (each x^(k) denotes one set of x(τ₁), ⋯, x(τ_N)),

    observable ⟨O⟩'s estimator is

    ⟨O⟩ ≈ (1/N) Σ_{k=1}^{N} O(x^(k))

    How to generate the desired configurations {x^(1), x^(2), ⋯, x^(N)} ∼ p[x(τ)]? Metropolis-Hastings algorithm and its variants, e.g. Hybrid Monte-Carlo (HMC), https://arxiv.org/abs/1701.02434v2

    39

    https://arxiv.org/abs/1701.02434v2

  • Exercise

    Try GSL Vegas (CPU) and gVegas (GPU) to calculate the path integral of the 1-d quantum oscillator with fixed initial and final positions, and compare with the analytical results. You need to fill in the correct integrand for the path integral.

    CPU in Exercise/CPU/hrmnc_oscltr.cpp

    GPU in Exercise/GPU/gVegasFunc.cu

    How to extract the ground state wave function ψ₀(x) and the ground state energy E₀? Try your solutions.

    40

  • Instructions

    Compile:

    Change the compile script compile.sh access permission to executable

    chmod +x ./compile.sh

    Compile

    ./compile.sh

    Launch calculation:

    hrmnc_oscltr.o Nt dt xmax Nx Npnts

    Nt , dt : number of discretization points x(τ_k) and time step size dt = τ_{k+1} − τ_k
    xmax , Nx : maximum x-value and number of x values between x = 0 and x = xmax
    Npnts : number of points used in the Vegas integration

    Output data file CPU_hrmnc_oscltr.dat

    41

  • An example Python script for compiling, running and analysis

    Change hrmnc_oscltr.py access permission to executable

    chmod +x ./hrmnc_oscltr.py

    Usage

    ./hrmnc_oscltr.py DEVICE MODE

    DEVICE : source directory CPU or GPU

    MODE : compile , run , analysis or clean

    Example: hrmnc_oscltr.py CPU run

    Parameters are set in hrmnc_oscltr.py ; see the previous slide

    Output data file CPU_hrmnc_oscltr.dat

    42

  • Metropolis-Hastings example

    Generating

    {x^(1), x^(2), ⋯, x^(N)} ∼ p[x(τ)] = e^{−S[x(τ)]} / ∫ Dx(τ) e^{−S[x(τ)]}

    Calculate the 2-point correlation function

    C₂(n) = (1/N_τ) Σ_j ⟨ x_{(j+n) mod N_τ} x_j ⟩

    Extract the energy level

    ΔE_n = ln( G_n / G_{n+1} ),  G_n ≡ C₂(n),  ΔE = E₁ − E₀

    43
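The measurement step can be sketched independently of the configuration generator; configurations are lists of site values x_j, with G_n ≡ C₂(n) as on the slide (the tiny worked example is illustrative):

```python
import math

def two_point(configs, n):
    # C₂(n) = (1/Nτ) Σ_j x_{(j+n) mod Nτ} · x_j, averaged over configurations
    Nt = len(configs[0])
    corr = 0.0
    for x in configs:
        corr += sum(x[(j + n) % Nt] * x[j] for j in range(Nt)) / Nt
    return corr / len(configs)

def delta_E(configs, n):
    # ΔE_n = ln( G_n / G_{n+1} ) with G_n ≡ C₂(n)
    return math.log(two_point(configs, n) / two_point(configs, n + 1))

# tiny worked example with a single "configuration"
cfg = [[1.0, 2.0, 3.0, 4.0]]
c0 = two_point(cfg, 0)   # (1 + 4 + 9 + 16)/4 = 7.5
c1 = two_point(cfg, 1)   # (2 + 6 + 12 + 4)/4 = 6.0
```

On real Monte-Carlo configurations C₂(n) decays roughly exponentially, and the log-ratio plateaus at ΔE = E₁ − E₀.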

  • 44

  • 45