

MSc Course in Mathematics and Finance

Imperial College London, 2010-11

Finite Difference Methods

Mark Davis

Department of Mathematics

Imperial College London

www.ma.ic.ac.uk/∼mdavis

Part II

3. Numerical solution of PDEs


4. Numerical Solution of Partial Differential Equations

1. Diffusion Equations of One State Variable.

∂u/∂t = c² ∂²u/∂x² ,   (x, t) ∈ D , (1)

where t is a time variable, x is a state variable, and u(x, t) is an unknown

function satisfying the equation.

To find a well-defined solution, we need to impose the initial condition

u(x, 0) = u0(x) (2)

and, if D = [a, b] × [0,∞), the boundary conditions

u(a, t) = ga(t) and u(b, t) = gb(t) , (3)

where u0, ga, gb are continuous functions.


If D = (−∞,∞) × (0,∞), we need to impose the boundary conditions

lim_{|x|→∞} u(x, t) e^{−a x²} = 0 for any a > 0. (4)

(4) implies u(x, t) does not grow too fast as |x| → ∞.

The diffusion equation (1) with the initial condition (2) and the boundary

conditions (3) is well-posed, i.e. there exists a unique solution that depends

continuously on u0, ga and gb.


2. Grid Points.

To find a numerical solution to equation (1) with finite difference methods,

we first need to define a set of grid points in the domain D as follows:

Choose a state step size Δx = (b − a)/N (N an integer) and a time step size Δt, draw a set of horizontal and vertical lines across D, and take all intersection points (xj, tn), or simply (j, n), where xj = a + j Δx, j = 0, . . . , N, and tn = n Δt, n = 0, 1, . . ..

If D = [a, b] × [0, T ], then choose Δt = T/M (M an integer) and tn = n Δt, n = 0, . . . , M.


[Grid: x0 = a, x1, x2, . . . , xN = b along the x-axis; t0 = 0, t1, t2, t3, . . . , tM = T along the t-axis.]


3. Finite Differences.

The partial derivatives

ux := ∂u/∂x  and  uxx := ∂²u/∂x²

are always approximated by central difference quotients, i.e.

ux ≈ (u^n_{j+1} − u^n_{j−1})/(2 Δx)  and  uxx ≈ (u^n_{j+1} − 2 u^n_j + u^n_{j−1})/(Δx)²   (5)

at a grid point (j, n). Here u^n_j = u(xj, tn).

Depending on how ut is approximated, we have three basic schemes: explicit, implicit, and Crank–Nicolson schemes.


4. Explicit Scheme.

If ut is approximated by a forward difference quotient

ut ≈ (u^{n+1}_j − u^n_j)/Δt   at (j, n),

then the corresponding difference equation to (1) at grid point (j, n) is

w^{n+1}_j = λ w^n_{j+1} + (1 − 2λ) w^n_j + λ w^n_{j−1} , (6)

where λ = c² Δt/(Δx)².

The initial condition is w^0_j = u0(xj), j = 0, . . . , N, and the boundary conditions are w^n_0 = ga(tn) and w^n_N = gb(tn), n = 0, 1, . . ..

The difference equations (6), j = 1, . . . , N − 1, can be solved explicitly.
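The forward march of scheme (6) can be sketched as follows (an illustrative sketch, not part of the original notes; the model problem and step sizes are assumptions chosen so the exact solution is known). For ut = uxx on [0, 1] with u(x, 0) = sin(πx) and zero boundary data, the exact solution is e^{−π²t} sin(πx):

```python
import numpy as np

def explicit_step(w, lam, ga, gb):
    """One step of scheme (6): w^{n+1}_j = lam w^n_{j+1} + (1 - 2 lam) w^n_j + lam w^n_{j-1}."""
    w_new = np.empty_like(w)
    w_new[1:-1] = lam*w[2:] + (1.0 - 2.0*lam)*w[1:-1] + lam*w[:-2]
    w_new[0], w_new[-1] = ga, gb          # boundary conditions (3)
    return w_new

# Model problem: u_t = u_xx on [0, 1], u(x, 0) = sin(pi x), u = 0 on the boundary.
c2, N = 1.0, 50
dx = 1.0/N
dt = 0.4*dx**2/c2                          # chosen so that lam = 0.4 <= 1/2 (stable)
lam = c2*dt/dx**2
x = np.linspace(0.0, 1.0, N + 1)
w = np.sin(np.pi*x)
M = 200
for _ in range(M):
    w = explicit_step(w, lam, 0.0, 0.0)
exact = np.exp(-np.pi**2*M*dt)*np.sin(np.pi*x)
err = np.max(np.abs(w - exact))            # small: O(dt) + O((dx)^2)
```

With λ = 0.4 ≤ 1/2 the error stays small; taking λ > 1/2 instead makes the iteration blow up, in line with the stability discussion later in these notes.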


5. Implicit Scheme.

If ut is approximated by a backward difference quotient

ut ≈ (u^{n+1}_j − u^n_j)/Δt   at (j, n + 1),

then the corresponding difference equation to (1) at grid point (j, n + 1) is

−λ w^{n+1}_{j+1} + (1 + 2λ) w^{n+1}_j − λ w^{n+1}_{j−1} = w^n_j . (7)

The difference equations (7), j = 1, . . . , N − 1, together with the initial and boundary conditions as before, can be solved using the Crout algorithm or the SOR algorithm.
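As an illustration (not from the notes), the tridiagonal system (7) can be solved in O(N) operations per time step with the Thomas algorithm, a compact form of the Crout factorisation for tridiagonal matrices; the test problem below is an assumption chosen for verification:

```python
import numpy as np

def tridiag_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination and back substitution
    (the Thomas algorithm, a compact Crout-type factorisation)."""
    n = len(diag)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = sup[0]/diag[0], rhs[0]/diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i-1]*cp[i-1]
        cp[i] = sup[i]/den if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i-1]*dp[i-1])/den
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

def implicit_step(w, lam, ga, gb):
    """One step of scheme (7): solve the tridiagonal system for w^{n+1}."""
    n = len(w) - 2                              # interior unknowns j = 1..N-1
    sub = -lam*np.ones(n - 1)
    sup = -lam*np.ones(n - 1)
    diag = (1.0 + 2.0*lam)*np.ones(n)
    rhs = w[1:-1].copy()
    rhs[0] += lam*ga                            # boundary terms moved to the rhs
    rhs[-1] += lam*gb
    w_new = np.empty_like(w)
    w_new[0], w_new[-1] = ga, gb
    w_new[1:-1] = tridiag_solve(sub, diag, sup, rhs)
    return w_new

# Same model problem as for the explicit scheme, but with lam = 2 > 1/2:
# the implicit scheme remains stable.
N = 50; dx = 1.0/N; lam = 2.0; dt = lam*dx**2
x = np.linspace(0.0, 1.0, N + 1)
w = np.sin(np.pi*x)
M = 100
for _ in range(M):
    w = implicit_step(w, lam, 0.0, 0.0)
err = np.max(np.abs(w - np.exp(-np.pi**2*M*dt)*np.sin(np.pi*x)))
```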


Explicit Method.

(w^{n+1}_j − w^n_j)/Δt = c² (w^n_{j−1} − 2 w^n_j + w^n_{j+1})/(Δx)²   (†)

Letting λ := c² Δt/(Δx)² gives (6).

Implicit Method.

(w^{n+1}_j − w^n_j)/Δt = c² (w^{n+1}_{j−1} − 2 w^{n+1}_j + w^{n+1}_{j+1})/(Δx)²   (‡)

Letting λ := c² Δt/(Δx)² gives (7).

In matrix form

⎡1 + 2λ   −λ                 ⎤
⎢  −λ   1 + 2λ   −λ          ⎥
⎢          ⋱      ⋱      ⋱  ⎥  w = b .
⎣                −λ   1 + 2λ ⎦

The matrix is tridiagonal and diagonally dominant ⇒ Crout / SOR.


6. Crank–Nicolson Scheme.

The Crank–Nicolson scheme is the average of the explicit scheme at (j, n)

and the implicit scheme at (j, n + 1).

The resulting difference equation is

−(λ/2) w^{n+1}_{j−1} + (1 + λ) w^{n+1}_j − (λ/2) w^{n+1}_{j+1} = (λ/2) w^n_{j−1} + (1 − λ) w^n_j + (λ/2) w^n_{j+1} . (8)

The difference equations (8), j = 1, . . . , N − 1, together with the initial and boundary conditions as before, can be solved using the Crout algorithm or the SOR algorithm.


Crank–Nicolson.

(1/2) [(†) + (‡)] gives

(w^{n+1}_j − w^n_j)/Δt = (1/2) c² (w^n_{j−1} − 2 w^n_j + w^n_{j+1})/(Δx)² + (1/2) c² (w^{n+1}_{j−1} − 2 w^{n+1}_j + w^{n+1}_{j+1})/(Δx)² .

Letting μ := (1/2) c² Δt/(Δx)² = λ/2 gives

−μ w^{n+1}_{j+1} + (1 + 2μ) w^{n+1}_j − μ w^{n+1}_{j−1} = w*_j ,  where  w*_j = μ w^n_{j+1} + (1 − 2μ) w^n_j + μ w^n_{j−1} .

This can be interpreted as a predictor–corrector pair: w*_j is the predictor (an explicit step with μ), and w^{n+1}_j is the corrector (an implicit step with μ).


7. Local Truncation Errors.

These are measures of the error by which the exact solution of a differential

equation does not satisfy the difference equation at the grid points and are

obtained by substituting the exact solution of the continuous problem into

the numerical scheme.

A necessary condition for the convergence of the numerical solutions to the

continuous solution is that the local truncation error tends to zero as the

step size goes to zero. In this case the method is said to be consistent.

It can be shown that all three methods are consistent.

The explicit and implicit schemes have local truncation errors O(Δt, (Δx)²), while that of the Crank–Nicolson scheme is O((Δt)², (Δx)²).


Local Truncation Error.

For the explicit scheme we get for the LTE at (j, n)

E^n_j = (u(xj, tn+1) − u(xj, tn))/Δt − c² (u(xj−1, tn) − 2 u(xj, tn) + u(xj+1, tn))/(Δx)² .

With the help of a Taylor expansion at (xj, tn) we find that

(u(xj, tn+1) − u(xj, tn))/Δt = ut(xj, tn) + O(Δt) ,

(u(xj−1, tn) − 2 u(xj, tn) + u(xj+1, tn))/(Δx)² = uxx(xj, tn) + O((Δx)²) .

Hence

E^n_j = [ut(xj, tn) − c² uxx(xj, tn)] + O(Δt) + O((Δx)²) = O(Δt) + O((Δx)²) ,

since the bracketed term vanishes by (1).


8. Numerical Stability.

Consistency is only a necessary but not a sufficient condition for convergence.

Roundoff errors incurred during calculations may lead to a blow-up of the solution or corrupt the whole computation.

A scheme is stable if roundoff errors are not amplified in the calculations.

The Fourier method can be used to check if a scheme is stable.

Assume that a numerical scheme admits a solution of the form

v^n_j = a^(n)(ω) e^{i j ω Δx} , (9)

where ω is the wave number and i = √−1.


Define

G(ω) = a^(n+1)(ω)/a^(n)(ω) ,

the amplification factor, which governs the growth of the Fourier component a^(n)(ω).

The von Neumann stability condition is given by

|G(ω)| ≤ 1

for 0 ≤ ωΔx ≤ π.

It can be shown that the explicit scheme is stable if and only if λ ≤ 1/2 (it is conditionally stable), while the implicit and Crank–Nicolson schemes are stable for any value of λ (they are unconditionally stable).


Stability Analysis.

For the explicit scheme we get, on substituting (9) into (6),

a^(n+1)(ω) e^{i j ω Δx} = λ a^(n)(ω) e^{i (j+1) ω Δx} + (1 − 2λ) a^(n)(ω) e^{i j ω Δx} + λ a^(n)(ω) e^{i (j−1) ω Δx}

⇒ G(ω) = a^(n+1)(ω)/a^(n)(ω) = λ e^{i ω Δx} + (1 − 2λ) + λ e^{−i ω Δx} .

The von Neumann stability condition then is

|G(ω)| ≤ 1 ⇐⇒ |λ e^{i ω Δx} + (1 − 2λ) + λ e^{−i ω Δx}| ≤ 1
⇐⇒ |(1 − 2λ) + 2λ cos(ω Δx)| ≤ 1
⇐⇒ |1 − 4λ sin²(ω Δx/2)| ≤ 1   [cos 2α = 1 − 2 sin² α]
⇐⇒ 0 ≤ 4λ sin²(ω Δx/2) ≤ 2
⇐⇒ 0 ≤ λ ≤ 1/(2 sin²(ω Δx/2))

for all 0 ≤ ω Δx ≤ π. This is equivalent to 0 ≤ λ ≤ 1/2.
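The computation gives G(ω) = 1 − 4λ sin²(ω Δx/2) for the explicit scheme; the equivalence can be checked numerically (an illustrative sketch, not from the notes):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 1001)            # theta = omega*dx in [0, pi]

def G(lam):
    """Amplification factor of the explicit scheme: G = 1 - 4 lam sin^2(theta/2)."""
    return 1.0 - 4.0*lam*np.sin(theta/2.0)**2

ok_half = np.max(np.abs(G(0.5))) <= 1.0 + 1e-12  # lam = 1/2: |G| <= 1 everywhere
bad = np.max(np.abs(G(0.6))) > 1.0               # lam = 0.6: |G| = 1.4 at theta = pi
```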


Remark.

The explicit method is stable if and only if

Δt ≤ (Δx)²/(2 c²). (†)

(†) is a strong restriction on the time step size Δt: if Δx is reduced to (1/2) Δx, then Δt must be reduced to (1/4) Δt, so the total computational work increases by a factor of 8.

Example.

ut = uxx ,  (x, t) ∈ [0, 1] × [0, 1].

Take Δx = 0.01. Then λ ≤ 1/2 ⇒ Δt ≤ 0.00005, i.e. the number of grid points is

(1/Δx) × (1/Δt) = 100 × 20,000 = 2 × 10⁶ .


Remark.

In vector notation, the explicit scheme can be written as

w^{n+1} = A w^n + b^n ,

where w^n = (w^n_1, . . . , w^n_{N−1})ᵀ ∈ R^{N−1},

A = ⎡1 − 2λ    λ                 ⎤
    ⎢   λ    1 − 2λ    λ         ⎥  ∈ R^{(N−1)×(N−1)} ,
    ⎢          ⋱        ⋱    ⋱  ⎥
    ⎣                 λ   1 − 2λ ⎦

and b^n = (λ w^n_0, 0, . . . , 0, λ w^n_N)ᵀ ∈ R^{N−1}.

For the implicit method we get

B w^{n+1} = w^n + b^{n+1} ,

where B ∈ R^{(N−1)×(N−1)} is tridiagonal with 1 + 2λ on the diagonal and −λ on the sub- and super-diagonals.


Remark.

Forward diffusion equation: ut − c² uxx = 0 , t ≥ 0.

Backward diffusion equation:

ut + c² uxx = 0 , t ≤ T ,
u(x, T) = uT(x) ∀ x ,
u(a, t) = ga(t), u(b, t) = gb(t) ∀ t [as before].

[Note: We could use the transformation v(x, t) := u(x, T − t) in order to transform this into a standard forward diffusion problem.]

We can solve the backward diffusion equation directly by starting at t = T and solving "backwards", i.e. given w^{n+1}, find w^n.

Implicit: w^{n+1} = A w^n + b^n (a system to solve for w^n).   Explicit: B w^{n+1} = w^n + b^n (gives w^n directly).

The von Neumann stability condition for the backward problem then becomes

|G(ω)| = |a^(n)(ω)/a^(n+1)(ω)| ≤ 1 .


Stability of the Binomial Model.

The binomial model is an explicit method for a backward equation:

V^n_j = (1/R) (p V^{n+1}_{j+1} + (1 − p) V^{n+1}_{j−1}) = (1/R) (p V^{n+1}_{j+1} + 0 · V^{n+1}_j + (1 − p) V^{n+1}_{j−1})

for j = −n, −n + 2, . . . , n − 2, n and n = N − 1, . . . , 1, 0. Here the initial values V^N_{−N}, V^N_{−N+2}, . . . , V^N_{N−2}, V^N_N are given.

Now let V^n_j = a^(n)(ω) e^{i j ω Δx}; then

a^(n)(ω) e^{i j ω Δx} = (1/R) (p a^(n+1)(ω) e^{i (j+1) ω Δx} + (1 − p) a^(n+1)(ω) e^{i (j−1) ω Δx})

⇒ G(ω) = (p e^{i ω Δx} + (1 − p) e^{−i ω Δx}) e^{−r Δt}   [with 1/R = e^{−r Δt}]
       = (cos(ω Δx) + q i sin(ω Δx)) e^{−r Δt}   [setting q := 2p − 1]

⇒ |G(ω)|² = (cos²(ω Δx) + q² sin²(ω Δx)) e^{−2 r Δt}
          = (1 + (q² − 1) sin²(ω Δx)) e^{−2 r Δt}
          ≤ e^{−2 r Δt} ≤ 1

if q² ≤ 1 ⇐⇒ −1 ≤ q ≤ 1 ⇐⇒ p ∈ [0, 1]. Hence the binomial model is stable.


Stability of the CRR Model.

We know that the binomial model is stable if p ∈ (0, 1).

For the CRR model we have

u = e^{σ√Δt} ,  d = e^{−σ√Δt} ,  p = (R − d)/(u − d) ,

so p ∈ (0, 1) is equivalent to u > R > d. Clearly, for Δt small, we can ensure that e^{σ√Δt} > e^{r Δt}. Hence the CRR model is stable if Δt is sufficiently small, i.e. if Δt < σ²/r².

Alternatively, one can argue (less rigorously) as follows. Since Δx = uS − S = S (e^{σ√Δt} − 1) ≈ S σ√Δt, and as the BSE can be written as

ut + (1/2) σ² S² uSS + . . . = 0   [so c² = (1/2) σ² S²],

it follows that

λ = c² Δt/(Δx)² = ((1/2) σ² S² Δt)/(S² σ² Δt) = 1/2 ⇒ CRR is stable.


9. Simplification of the BSE.

Assume V (S, t) is the price of a European option at time t.

Then V satisfies the Black–Scholes equation with appropriate initial and boundary conditions.

Define

τ = T − t ,  x = ln S ,  w(τ, x) = e^{α x + β τ} V (t, S) ,

where α and β are parameters. Then the Black–Scholes equation can be transformed into a basic diffusion equation

∂w/∂τ = (1/2) σ² ∂²w/∂x²

with a new set of initial and boundary conditions.

Finite difference methods can be used to solve the corresponding difference

equations and hence to derive option values at grid points.


Transformation of the BSE.

Consider a call option.

Let τ = T − t be the remaining time to maturity and set u(S, τ) = V (S, t). Then ∂u/∂τ = −∂V/∂t, and the BSE is equivalent to

uτ = (1/2) σ² S² uSS + r S uS − r u , (†)
u(S, 0) = V (S, T) = (S − X)⁺ , (IC)
u(0, τ) = V (0, t) = 0 ,  u(S, τ) = V (S, t) ≈ S as S → ∞. (BC)

Let x = ln S ( ⇐⇒ S = eˣ) and set u(x, τ) = u(S, τ). Then

ux = uS eˣ = S uS ,  uxx = S uS + S² uSS ,

and (†) becomes

uτ = (1/2) σ² uxx + (r − (1/2) σ²) ux − r u , (‡)
u(x, 0) = u(S, 0) = (eˣ − X)⁺ , (IC)
u(x, τ) = u(0, τ) = 0 as x → −∞ ,  u(x, τ) = u(eˣ, τ) ≈ eˣ as x → ∞. (BC)


Note that the growth condition (4), lim_{|x|→∞} u(x, τ) e^{−a x²} = 0 for any a > 0, is satisfied. Hence (‡) is well defined.

Let w(x, τ) = e^{α x + β τ} u(x, τ) ⇐⇒ u(x, τ) = e^{−α x − β τ} w(x, τ) =: C w(x, τ). Then

uτ = C (−β w + wτ) ,
ux = C (−α w + wx) ,
uxx = C (−α (−α w + wx) + (−α wx + wxx)) = C (α² w − 2α wx + wxx) .

So (‡) is equivalent to

C (−β w + wτ) = (1/2) σ² C (α² w − 2α wx + wxx) + (r − (1/2) σ²) C (−α w + wx) − r C w .

In order to cancel the w and wx terms we need to have

−β = (1/2) σ² α² − (r − (1/2) σ²) α − r  and  0 = (1/2) σ² (−2α) + r − (1/2) σ² ,

which is equivalent to

α = (r − (1/2) σ²)/σ²  and  β = (r − (1/2) σ²)²/(2 σ²) + r .
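A quick numerical sanity check of these formulas (the sample values of r and σ are assumptions for illustration, not from the notes):

```python
# Check that the stated alpha and beta cancel the w and w_x terms in the
# transformed equation for sample parameters.
r, sigma = 0.05, 0.2
alpha = (r - 0.5*sigma**2)/sigma**2
beta = (r - 0.5*sigma**2)**2/(2.0*sigma**2) + r

# Coefficient of w after the substitution (must vanish):
w_coeff = 0.5*sigma**2*alpha**2 - (r - 0.5*sigma**2)*alpha - r + beta
# Coefficient of w_x after the substitution (must vanish):
wx_coeff = -sigma**2*alpha + (r - 0.5*sigma**2)
```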


With this choice of α and β, equation (‡) is equivalent to

wτ = (1/2) σ² wxx , (♯)
w(x, 0) = e^{α x} u(x, 0) = e^{α x} (eˣ − X)⁺ , (IC)
w(x, τ) = 0 as x → −∞ ,  w(x, τ) ≈ e^{α x + β τ} eˣ as x → ∞. (BC)

Note that the growth condition (4) is satisfied. Hence (♯) is well defined.

Implementation.

1. Choose a truncated interval [a, b] to approximate (−∞, ∞). Since e⁻⁸ ≈ 0.0003 and e⁸ ≈ 2981, [a, b] = [−8, 8] serves all practical purposes.

2. Choose integers N, M to get the step sizes Δx = (b − a)/N and Δτ = (T − t)/M. Grid points (xj, τn): xj = a + j Δx, j = 0, 1, . . . , N and τn = n Δτ, n = 0, 1, . . . , M.

Note: x0, xN and τ0 represent the boundary of the grid with known values.


3. Solve the transformed equation wτ = (1/2) σ² wxx with

w(x, 0) = e^{α x} (eˣ − X)⁺ , (IC)
w(a, τ) = 0 ,  w(b, τ) = e^{(α+1) b + β τ}  or  e^{α b} (e^b − X) e^{β τ} (a better choice). (BC)

Note: If the explicit method is used, N and M need to be chosen such that

(1/2) σ² Δτ/(Δx)² ≤ 1/2 ⇐⇒ M ≥ σ² (T − t) N²/(b − a)² .

If the implicit or Crank–Nicolson scheme is used, there are no restrictions on N, M. Use Crout or SOR to solve.

4. Assume w(xj, τM), j = 0, 1, . . . , N, are the solutions from step 3. Then the call option price at time t is

V (Sj, t) = e^{−α xj − β (T−t)} w(xj, τM) ,  j = 0, 1, . . . , N ,

where Sj = e^{xj} and τM = T − t.

Note: The Sj are not equally spaced.
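The four steps can be sketched as follows (not part of the notes; the parameters r, σ, T, X and the grid sizes are illustrative assumptions). Crank–Nicolson is used in τ and the tridiagonal systems are solved with a Thomas (Crout-type) sweep:

```python
import math
import numpy as np

r, sigma, T, X = 0.05, 0.2, 1.0, 1.0
a, b, N, M = -8.0, 8.0, 800, 200                         # steps 1 and 2
dx, dtau = (b - a)/N, T/M
alpha = (r - 0.5*sigma**2)/sigma**2
beta = (r - 0.5*sigma**2)**2/(2.0*sigma**2) + r
kappa = 0.5*sigma**2*dtau/dx**2                          # lambda for the heat equation

x = a + dx*np.arange(N + 1)
w = np.exp(alpha*x)*np.maximum(np.exp(x) - X, 0.0)       # (IC)

def thomas(sub, diag, sup, rhs):
    """Tridiagonal solve (compact Crout factorisation)."""
    n = len(diag)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = sup[0]/diag[0], rhs[0]/diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i-1]*cp[i-1]
        cp[i] = sup[i]/den if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i-1]*dp[i-1])/den
    out = np.zeros(n)
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i]*out[i+1]
    return out

# Step 3: Crank-Nicolson in tau on the interior nodes.
sub = -0.5*kappa*np.ones(N - 2)
sup = sub.copy()
diag = (1.0 + kappa)*np.ones(N - 1)
for n in range(1, M + 1):
    rhs = 0.5*kappa*w[:-2] + (1.0 - kappa)*w[1:-1] + 0.5*kappa*w[2:]
    wa = 0.0                                             # w(a, tau) = 0
    wb = math.exp(alpha*b)*(math.exp(b) - X)*math.exp(beta*n*dtau)  # better BC
    rhs[0] += 0.5*kappa*wa
    rhs[-1] += 0.5*kappa*wb
    w[1:-1] = thomas(sub, diag, sup, rhs)
    w[0], w[-1] = wa, wb

# Step 4: undo the transformation to recover call prices at the (unequally
# spaced) asset prices S_j = exp(x_j).
S = np.exp(x)
V = np.exp(-alpha*x - beta*T)*w
```

On this grid the at-the-money node x = 0 (S = 1) sits at index j = 400, and the computed price is close to the Black–Scholes formula value.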


10. Direct Discretization of the BSE.

Exercise: Apply the Crank–Nicolson scheme directly to the Black–Scholes equation, i.e. with no transformation of variables; write out the resulting difference equations and do a stability analysis.

C++ Exercise: Write a program to solve the BSE using the result of the previous exercise and the Crout algorithm. The inputs are the interest rate r, the volatility σ, the current time t, the expiry time T, the strike price X, the maximum price Smax, the number of intervals N in [0, Smax], and the number of subintervals M in [t, T]. The outputs are the asset prices Sj, j = 0, 1, . . . , N, at time t, and their corresponding European call and put prices (with the same strike price X).


11. Greeks.

Assume that the asset prices Sj and option values Vj, j = 0, 1, . . . , N , are

known at time t.

The sensitivities of V at Sj, j = 1, . . . , N − 1, are computed as follows:

δj = ∂V/∂S |_{S=Sj} ≈ (Vj+1 − Vj−1)/(Sj+1 − Sj−1) ,

which is (Vj+1 − Vj−1)/(2 ΔS) if S is equally spaced;

γj = ∂²V/∂S² |_{S=Sj} ≈ [ (Vj+1 − Vj)/(Sj+1 − Sj) − (Vj − Vj−1)/(Sj − Sj−1) ] / (Sj − Sj−1) ,

which is (Vj+1 − 2 Vj + Vj−1)/(ΔS)² if S is equally spaced.
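A direct transcription of these formulas (illustrative, not from the notes; the quadratic test function is an assumption, chosen because on a uniform grid the central quotients reproduce its delta and gamma exactly):

```python
import numpy as np

def delta(S, V, j):
    """delta_j: central quotient above (also usable on non-uniform grids)."""
    return (V[j+1] - V[j-1])/(S[j+1] - S[j-1])

def gamma(S, V, j):
    """gamma_j: difference of one-sided slopes, divided by S_j - S_{j-1}
    as in the notes (for very non-uniform grids, (S_{j+1} - S_{j-1})/2 is a
    common alternative denominator)."""
    slope_up = (V[j+1] - V[j])/(S[j+1] - S[j])
    slope_dn = (V[j] - V[j-1])/(S[j] - S[j-1])
    return (slope_up - slope_dn)/(S[j] - S[j-1])

S = np.linspace(50.0, 150.0, 101)    # equally spaced grid, dS = 1
V = S**2                             # test values: delta_j = 2 S_j, gamma_j = 2
```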


12. Diffusion Equations of Two State Variables.

∂u/∂t = α² (∂²u/∂x² + ∂²u/∂y²) ,   (x, y, t) ∈ [a, b] × [c, d] × [0, ∞). (10)

The initial conditions are

u(x, y, 0) = u0(x, y) ∀ (x, y) ∈ [a, b] × [c, d] ,

and the boundary conditions are

u(a, y, t) = ga(y, t), u(b, y, t) = gb(y, t) ∀ y ∈ [c, d], t ≥ 0 ,

and

u(x, c, t) = gc(x, t), u(x, d, t) = gd(x, t) ∀ x ∈ [a, b], t ≥ 0 .

Here we assume that all the functions involved are consistent, in the sense

that they have the same value at common points, e.g. ga(c, t) = gc(a, t) for

all t ≥ 0.


13. Grid Points.

(xi, yj, tn), where

xi = a + i Δx ,  Δx = (b − a)/I ,  i = 0, . . . , I ,
yj = c + j Δy ,  Δy = (d − c)/J ,  j = 0, . . . , J ,
tn = n Δt ,  Δt = T/N ,  n = 0, . . . , N ,

and I, J, N are integers.

Recalling the finite differences (5), we have

uxx ≈ (u^n_{i+1,j} − 2 u^n_{i,j} + u^n_{i−1,j})/(Δx)²  and  uyy ≈ (u^n_{i,j+1} − 2 u^n_{i,j} + u^n_{i,j−1})/(Δy)²

at a grid point (i, j, n).


Depending on how ut is approximated, we have three basic schemes: explicit, implicit, and Crank–Nicolson schemes.


14. Explicit Scheme.

If ut is approximated by a forward difference quotient

ut ≈ (u^{n+1}_{i,j} − u^n_{i,j})/Δt   at (i, j, n),

then the corresponding difference equation at grid point (i, j, n) is

w^{n+1}_{i,j} = (1 − 2λ − 2μ) w^n_{i,j} + λ w^n_{i+1,j} + λ w^n_{i−1,j} + μ w^n_{i,j+1} + μ w^n_{i,j−1} (11)

for i = 1, . . . , I − 1 and j = 1, . . . , J − 1, where

λ = α² Δt/(Δx)²  and  μ = α² Δt/(Δy)² .

(11) can be solved explicitly. It has local truncation error O(Δt, (Δx)², (Δy)²), but is only conditionally stable.
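A sketch of marching (11) forward (not from the notes; the model problem is an illustrative assumption, and the sufficient condition λ + μ ≤ 1/2 used below is the standard 2-d analogue of λ ≤ 1/2):

```python
import numpy as np

def explicit_step_2d(w, lam, mu):
    """One step of (11); boundary rows and columns keep their prescribed values."""
    w_new = w.copy()
    w_new[1:-1, 1:-1] = ((1.0 - 2.0*lam - 2.0*mu)*w[1:-1, 1:-1]
                         + lam*(w[2:, 1:-1] + w[:-2, 1:-1])
                         + mu*(w[1:-1, 2:] + w[1:-1, :-2]))
    return w_new

# Model problem on [0,1]^2 with u(x, y, 0) = sin(pi x) sin(pi y) and zero
# boundary data; the exact solution decays like exp(-2 pi^2 alpha^2 t).
a2, I_, J_ = 1.0, 40, 40
dx, dy = 1.0/I_, 1.0/J_
dt = 0.2*dx**2/a2                   # lam = mu = 0.2, so lam + mu <= 1/2 (stable)
lam, mu = a2*dt/dx**2, a2*dt/dy**2
xv = np.linspace(0.0, 1.0, I_ + 1)
yv = np.linspace(0.0, 1.0, J_ + 1)
w = np.outer(np.sin(np.pi*xv), np.sin(np.pi*yv))
steps = 100
for _ in range(steps):
    w = explicit_step_2d(w, lam, mu)
exact = np.exp(-2.0*np.pi**2*a2*steps*dt)*np.outer(np.sin(np.pi*xv), np.sin(np.pi*yv))
err = np.abs(w - exact).max()
```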


15. Implicit Scheme.

If ut is approximated by a backward difference quotient ut ≈ (u^{n+1}_{i,j} − u^n_{i,j})/Δt at (i, j, n + 1), then the difference equation at grid point (i, j, n + 1) is

(1 + 2λ + 2μ) w^{n+1}_{i,j} − λ w^{n+1}_{i+1,j} − λ w^{n+1}_{i−1,j} − μ w^{n+1}_{i,j+1} − μ w^{n+1}_{i,j−1} = w^n_{i,j} (12)

for i = 1, . . . , I − 1 and j = 1, . . . , J − 1.

For fixed n, there are (I − 1)(J − 1) unknowns and equations. (12) can be solved by relabeling the grid points and using the SOR algorithm.

(12) is unconditionally stable with local truncation error O(Δt, (Δx)², (Δy)²), but it is more difficult to solve, as the system is no longer tridiagonal, so the Crout algorithm cannot be applied.

16. Crank–Nicolson Scheme.

It is the average of the explicit scheme at (i, j, n) and the implicit scheme at (i, j, n + 1). It is similar to the implicit scheme but with the improved local truncation error O((Δt)², (Δx)², (Δy)²).


Solving the Implicit Scheme.

(1 + 2λ + 2μ) w^{n+1}_{i,j} − λ w^{n+1}_{i+1,j} − λ w^{n+1}_{i−1,j} − μ w^{n+1}_{i,j+1} − μ w^{n+1}_{i,j−1} = w^n_{i,j}

With SOR for ω ∈ (0, 2): for each n = 0, 1, . . . , N,

1. Set w^{n+1,0} := w^n and fill in the boundary values w^{n+1,0}_{0,j}, w^{n+1,0}_{I,j}, w^{n+1,0}_{i,0}, w^{n+1,0}_{i,J} for all i, j.

2. For k = 0, 1, . . . : for i = 1, . . . , I − 1, j = 1, . . . , J − 1,

w̃_{i,j} = (w^n_{i,j} + λ w^{n+1,k}_{i+1,j} + λ w^{n+1,k+1}_{i−1,j} + μ w^{n+1,k}_{i,j+1} + μ w^{n+1,k+1}_{i,j−1}) / (1 + 2λ + 2μ) ,
w^{n+1,k+1}_{i,j} = (1 − ω) w^{n+1,k}_{i,j} + ω w̃_{i,j} ,

until ‖w^{n+1,k+1} − w^{n+1,k}‖ < ε.

3. Set w^{n+1} = w^{n+1,k+1}.
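The sweep above can be sketched as follows (illustrative, not from the notes; the grid, ω and the stopping tolerance are assumptions):

```python
import numpy as np

def implicit_step_sor(w_old, lam, mu, omega=1.5, eps=1e-10, max_sweeps=10000):
    """One time step of (12) by SOR. w_old holds w^n including boundary values;
    boundary values are kept fixed during the iteration."""
    w = w_old.copy()                                    # w^{n+1,0} := w^n
    c = 1.0 + 2.0*lam + 2.0*mu
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(1, w.shape[0] - 1):
            for j in range(1, w.shape[1] - 1):
                # Gauss-Seidel value: (i-1, j) and (i, j-1) were already
                # updated this sweep, matching the k/k+1 indices above.
                gs = (w_old[i, j] + lam*(w[i+1, j] + w[i-1, j])
                      + mu*(w[i, j+1] + w[i, j-1]))/c
                new = (1.0 - omega)*w[i, j] + omega*gs  # over-relaxation
                diff = max(diff, abs(new - w[i, j]))
                w[i, j] = new
        if diff < eps:                                  # ||w^{k+1} - w^k|| < eps
            break
    return w

I_, J_, lam, mu = 10, 10, 1.0, 1.0
xv = np.linspace(0.0, 1.0, I_ + 1)
yv = np.linspace(0.0, 1.0, J_ + 1)
w0 = np.outer(np.sin(np.pi*xv), np.sin(np.pi*yv))
w1 = implicit_step_sor(w0, lam, mu)
```

Since the system is diagonally dominant, the sweep converges; on exit, w1 satisfies (12) up to the stopping tolerance.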


With Block Jacobi/Gauss–Seidel.

Denote

wj = (w1,j, w2,j, . . . , wI−1,j)ᵀ ∈ R^{I−1} ,  j = 1, . . . , J − 1 ,  and  w = (w1ᵀ, w2ᵀ, . . . , wJ−1ᵀ)ᵀ ∈ R^{(I−1)(J−1)} .

On setting c = 1 + 2λ + 2μ, we have from (12), for j fixed,

A w^{n+1}_j + B w^{n+1}_{j+1} + B w^{n+1}_{j−1} = d^n_j ,

where A ∈ R^{(I−1)×(I−1)} is the tridiagonal matrix with c on the diagonal and −λ on the sub- and super-diagonals, B = −μ I ∈ R^{(I−1)×(I−1)}, and

d^n_j = (w^n_{1,j} + λ w^{n+1}_{0,j}, w^n_{2,j}, . . . , w^n_{I−2,j}, w^n_{I−1,j} + λ w^{n+1}_{I,j})ᵀ .


Rewriting

A wn+1j + B wn+1

j+1 + B wn+1j−1 = dn

j j = 1, . . . , J − 1

as

A B

B A B. . . . . . . . .

B A B

B A

wn+11

wn+12......

wn+1J−1

=

dn1 − B wn+1

0

dn2...

dnJ−2

dnJ−1 − B wn+1

J

=:

dn1

dn2...

dnJ−2

dnJ−1

,

where wn+10 and wn+1

J represent boundary points, leads to the following Block

Jacobi iteration: For k = 0, 1, . . .

A wn+1,k+11 = −B wn+1,k

2 + dn1

A wn+1,k+12 = −B wn+1,k

1 − B wn+1,k3 + dn

2...

A wn+1,k+1J−2 = −B wn+1,k

J−3 − B wn+1,kJ−1 + dn

J−2

A wn+1,k+1J−1 = −B wn+1,k

J−2 + dnJ−1


Similarly, the Block Gauss–Seidel iteration is given by: for k = 0, 1, . . .

A w^{n+1,k+1}_1 = −B w^{n+1,k}_2 + d^n_1
A w^{n+1,k+1}_2 = −B w^{n+1,k+1}_1 − B w^{n+1,k}_3 + d^n_2
   ⋮
A w^{n+1,k+1}_{J−2} = −B w^{n+1,k+1}_{J−3} − B w^{n+1,k}_{J−1} + d^n_{J−2}
A w^{n+1,k+1}_{J−1} = −B w^{n+1,k+1}_{J−2} + d^n_{J−1}

In each case, use the Crout algorithm to solve for w^{n+1,k+1}_j, j = 1, . . . , J − 1.

Note on Stability.

Recall that in 1d a scheme was stable if |G(ω)| = |a^(n+1)(ω)/a^(n)(ω)| ≤ 1, where v^n_j = a^(n)(ω) e^{√−1 j ω Δx}.

In 2d, this is adapted to

v^n_{i,j} = a^(n)(ω) e^{√−1 (i ω Δx + j ω Δy)} .


17. Alternating Direction Implicit (ADI) Method.

An alternative finite difference method is the ADI scheme, which is unconditionally stable while the difference equations are still tridiagonal and diagonally dominant.

The ADI algorithm can be used to efficiently solve the Black–Scholes two-asset pricing equation:

Vt + (1/2) σ1² S1² VS1S1 + (1/2) σ2² S2² VS2S2 + ρ σ1 σ2 S1 S2 VS1S2 + r S1 VS1 + r S2 VS2 − r V = 0. (13)

See Clewlow and Strickland (1998) for details on how to transform the

Black–Scholes equation (13) into the basic diffusion equation (10) and then

to solve it with the ADI scheme.


ADI scheme

Implicit method at (i, j, n + 1), implicit in y (uxx is approximated using (i, j, n) data):

(w^{n+1}_{i,j} − w^n_{i,j})/Δt = α² (w^n_{i+1,j} − 2 w^n_{i,j} + w^n_{i−1,j})/(Δx)² + α² (w^{n+1}_{i,j+1} − 2 w^{n+1}_{i,j} + w^{n+1}_{i,j−1})/(Δy)² .

Implicit method at (i, j, n + 2), implicit in x (uyy is approximated using (i, j, n + 1) data):

(w^{n+2}_{i,j} − w^{n+1}_{i,j})/Δt = α² (w^{n+2}_{i+1,j} − 2 w^{n+2}_{i,j} + w^{n+2}_{i−1,j})/(Δx)² + α² (w^{n+1}_{i,j+1} − 2 w^{n+1}_{i,j} + w^{n+1}_{i,j−1})/(Δy)² .

We can write the two equations as follows:

−μ w^{n+1}_{i,j+1} + (1 + 2μ) w^{n+1}_{i,j} − μ w^{n+1}_{i,j−1} = λ w^n_{i+1,j} + (1 − 2λ) w^n_{i,j} + λ w^n_{i−1,j} (†)

−λ w^{n+2}_{i+1,j} + (1 + 2λ) w^{n+2}_{i,j} − λ w^{n+2}_{i−1,j} = μ w^{n+1}_{i,j+1} + (1 − 2μ) w^{n+1}_{i,j} + μ w^{n+1}_{i,j−1} (‡)


To solve (†), fix i = 1, . . . , I − 1 and solve a tridiagonal system to get w^{n+1}_{i,j} for j = 1, . . . , J − 1. This can be done with e.g. the Crout algorithm. To solve (‡), fix j = 1, . . . , J − 1 and solve a tridiagonal system to get w^{n+2}_{i,j} for i = 1, . . . , I − 1.

Currently the method works on the interval [tn, tn+2] and has features of an explicit method. In order to obtain an (unconditionally stable) implicit method, we need to adapt the method so that it works on the interval [tn, tn+1] and hence gives values w^n_{i,j} for all n = 1, . . . , N.

Introduce the intermediate time point n + 1/2. Then (†) generates the intermediate values w^{n+1/2}_{i,j} (not used as output) and (‡) generates w^{n+1}_{i,j}:

−(μ/2) w^{n+1/2}_{i,j+1} + (1 + μ) w^{n+1/2}_{i,j} − (μ/2) w^{n+1/2}_{i,j−1} = (λ/2) w^n_{i+1,j} + (1 − λ) w^n_{i,j} + (λ/2) w^n_{i−1,j} (†)

−(λ/2) w^{n+1}_{i+1,j} + (1 + λ) w^{n+1}_{i,j} − (λ/2) w^{n+1}_{i−1,j} = (μ/2) w^{n+1/2}_{i,j+1} + (1 − μ) w^{n+1/2}_{i,j} + (μ/2) w^{n+1/2}_{i,j−1} (‡)
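One full ADI step per the half-step form of (†)/(‡) can be sketched as follows (not from the notes; the model problem, grid and step sizes are illustrative assumptions, zero boundary data is assumed, and the tridiagonal solves use a Thomas/Crout sweep):

```python
import numpy as np

def tridiag_solve(sub, diag, sup, rhs):
    """Tridiagonal solve (compact Crout factorisation)."""
    n = len(diag)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = sup[0]/diag[0], rhs[0]/diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i-1]*cp[i-1]
        cp[i] = sup[i]/den if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i-1]*dp[i-1])/den
    out = np.zeros(n)
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i]*out[i+1]
    return out

def adi_step(w, lam, mu):
    """One ADI step: half step implicit in y via (†), then implicit in x via (‡)."""
    half = np.zeros_like(w)
    out = np.zeros_like(w)
    nI, nJ = w.shape[0] - 2, w.shape[1] - 2       # interior counts
    # (†): for each fixed i, solve a tridiagonal system in j.
    sub = -0.5*mu*np.ones(nJ - 1)
    diag = (1.0 + mu)*np.ones(nJ)
    for i in range(1, nI + 1):
        rhs = (0.5*lam*w[i+1, 1:-1] + (1.0 - lam)*w[i, 1:-1]
               + 0.5*lam*w[i-1, 1:-1])
        half[i, 1:-1] = tridiag_solve(sub, diag, sub, rhs)
    # (‡): for each fixed j, solve a tridiagonal system in i.
    sub = -0.5*lam*np.ones(nI - 1)
    diag = (1.0 + lam)*np.ones(nI)
    for j in range(1, nJ + 1):
        rhs = (0.5*mu*half[1:-1, j+1] + (1.0 - mu)*half[1:-1, j]
               + 0.5*mu*half[1:-1, j-1])
        out[1:-1, j] = tridiag_solve(sub, diag, sub, rhs)
    return out

# Decay test: u = sin(pi x) sin(pi y) decays like exp(-2 pi^2 a^2 t); here
# lam = mu = 2 violates the explicit restriction, yet ADI remains stable.
a2, I_ = 1.0, 20
dx = 1.0/I_
lam = mu = 2.0
dt = lam*dx**2/a2
xv = np.linspace(0.0, 1.0, I_ + 1)
w = np.outer(np.sin(np.pi*xv), np.sin(np.pi*xv))
steps = 20
for _ in range(steps):
    w = adi_step(w, lam, mu)
exact = np.exp(-2.0*np.pi**2*a2*steps*dt)*np.outer(np.sin(np.pi*xv), np.sin(np.pi*xv))
err = np.abs(w - exact).max()
```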
