-
7/27/2019 S R S Varadhan Tata Institute of Fundamental Research Lectures on Diffusion Problems and Partial Differential Equ
1/323
Lectures on
Diffusion Problems and Partial Differential
Equations
By
S. R. S. Varadhan
Notes by
P. Muthuramalingam and Tara R. Nanda
Tata Institute of Fundamental Research, Bombay
1989
Author
S. R. S. Varadhan
Courant Institute of Mathematical Sciences
251, Mercer Street
New York. N. Y. 10012.
U.S.A.
© Tata Institute of Fundamental Research, 1989
ISBN 3-540-08773-7. Springer-Verlag, Berlin, Heidelberg. New York
ISBN 0-387-08773-7. Springer-Verlag, New York. Heidelberg. Berlin
No part of this book may be reproduced in any
form by print, microfilm or any other means with-
out written permission from the Tata Institute of
Fundamental Research, Bombay 400 005
Printed by N. S. Ray at the Book Centre Limited
Sion East, Bombay 400 022 and published by H. Goetze
Springer-Verlag, Heidelberg, West Germany
PRINTED IN INDIA
Contents
15 Equivalent Form of Ito Process 105
16 Ito's Formula 117
17 Solution of Poisson's Equation 129
18 The Feynman-Kac Formula 133
19 An Application of the Feynman-Kac Formula 139
20 Brownian Motion with Drift 147
21 Integral Equations 155
22 Large Deviations 161
23 Stochastic Integral for a Wider Class of Functions 187
24 Explosions 195
25 Construction of a Diffusion Process 201
26 Uniqueness of Diffusion Process 211
27 On Lipschitz Square Roots 223
28 Random Time Changes 229
29 Cameron-Martin-Girsanov Formula 237
30 Behaviour of Diffusions for Large Times 245
31 Invariant Probability Distributions 253
32 Ergodic Theorem 275
33 Application of Stochastic Integral 281
Appendix 284
Language of Probability 284
Kolmogorov's Theorem 288
Martingales 290
Uniform Integrability 305
Up Crossings and Down Crossings 307
Bibliography 317
Preface
THESE ARE NOTES based on the lectures given at the T.I.F.R. Centre, Indian Institute of Science, Bangalore, during July and August
of 1977. Starting from Brownian Motion, the lectures quickly got into
the areas of Stochastic Differential Equations and Diffusion Theory. An
attempt was made to introduce to the students diverse aspects of the
theory. The last section on Martingales is based on some additional
lectures given by K. Ramamurthy of the Indian Institute of Science. The
author would like to express his appreciation of the efforts by Tara R.
Nanda and P. L. Muthuramalingam whose dedication and perseverance
has made these notes possible.
S.R.S. Varadhan
1. The Heat Equation
LET US CONSIDER the equation

(1) $u_t - \frac{1}{2}\Delta u = 0$,

which describes (in a suitable system of units) the temperature distribution of a certain homogeneous, isotropic body in the absence of any heat sources within the body. Here

$$u = u(x_1, \ldots, x_d, t); \quad u_t = \frac{\partial u}{\partial t}; \quad \Delta u = \sum_{i=1}^{d} \frac{\partial^2 u}{\partial x_i^2},$$

$t$ represents the time ranging over $[0, \infty)$ or $[0, T]$ and $x = (x_1, \ldots, x_d)$ belongs to $\mathbb{R}^d$.
We first consider the initial value problem. It consists in integrating equation (1) subject to the initial condition

(2) $u(0, x) = f(x)$.

The relation (2) is to be understood in the sense that

$$\operatorname*{Lt}_{t \to 0} u(t, x) = f(x).$$
Physically (2) means that the distribution of temperature throughout
the body is known at the initial moment of time.
We assume that the solution $u$ has continuous derivatives in the space coordinates up to second order inclusive and a first-order derivative in time.
Since equation (1) is linear with constant coefficients it is invariant under time as well as space translations. This means that translates of solutions are also solutions. Further, for $s \ge 0$, $t > 0$ and $y \in \mathbb{R}^d$,

(6) $u(t, x) = \dfrac{1}{[2\pi(t+s)]^{d/2}} \exp\left(-\dfrac{|x-y|^2}{2(t+s)}\right)$

and for $t > s$, $y \in \mathbb{R}^d$,

(7) $u(t, x) = \dfrac{1}{[2\pi(t-s)]^{d/2}} \exp\left(-\dfrac{|x-y|^2}{2(t-s)}\right)$

are also solutions of the heat equation (1).
The above method of solving the initial value problem is a sort of
trial method, viz. we pick out a solution and verify that it satisfies (1).
But one may ask, how does one obtain the solution? A partial clue to this
is provided by the method of Fourier transforms. We pretend as if our
solution u(t, x) is going to be very well behaved and allow all operations
performed on u to be legitimate.
Put $v(t, x) = \hat{u}(t, x)$, where $\hat{\ }$ stands for the Fourier transform in the space variables only (in this case), i.e.

$$v(t, x) = \int_{\mathbb{R}^d} u(t, y)\, e^{i\langle x, y\rangle}\, dy.$$

Using equation (1), one easily verifies that

(8) $v_t(t, x) = -\frac{1}{2}|x|^2 v(t, x)$

with

(9) $v(0, x) = \hat{f}(x)$.

The solution of equation (8) is given by

(10) $v(t, x) = \hat{f}(x)\, e^{-t|x|^2/2}$.

We have used (9) in obtaining (10).
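The passage from (8) to (10) can be checked numerically: damping each spatial Fourier mode of $f$ by $e^{-t|\xi|^2/2}$ should reproduce the solution of the heat equation. A minimal sketch (the grid, time and Gaussian initial condition are illustrative choices, not taken from the text):

```python
import numpy as np

# A numerical sketch of (10): each spatial Fourier mode of f is damped by
# exp(-t * |xi|^2 / 2).  For f(x) = exp(-x^2/2) the exact solution is
# u(t, x) = exp(-x^2 / (2(1+t))) / sqrt(1+t).
L, n, t = 40.0, 2048, 1.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
f = np.exp(-x**2 / 2)                       # initial temperature u(0, x)

xi = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
u = np.fft.ifft(np.fft.fft(f) * np.exp(-t * xi**2 / 2)).real

exact = np.exp(-x**2 / (2 * (1 + t))) / np.sqrt(1 + t)
print(np.max(np.abs(u - exact)))            # should be very small
```

On a domain this wide the periodic images introduced by the FFT are negligible, so the discrete and exact solutions agree to high accuracy.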
$$v(t, x) = \int_0^t w(t, x, s)\, ds,$$

where

$$w(t, x, s) = \int_{\mathbb{R}^d} g(s, y)\, \frac{1}{[2\pi(t-s)]^{d/2}} \exp\left(-\frac{|x-y|^2}{2(t-s)}\right) dy.$$

Exercise 3. Show that $v(t, x)$ defined above solves the inhomogeneous heat equation and satisfies $v(0, x) = 0$. Assume that $g$ is sufficiently smooth and has compact support. ($v_t - \frac{1}{2}\Delta v = \operatorname{Lt}_{s \to t} w(t, x, s)$; now use part (b) of Exercise (1).)

Remark 1. We can assume $g$ has compact support because in evaluating $v_t - \frac{1}{2}\Delta v$ the contribution to the integral is mainly from a small neighbourhood of the point $(t, x)$. Outside this neighbourhood

$$\frac{1}{[2\pi(t-s)]^{d/2}} \exp\left(-\frac{|x-y|^2}{2(t-s)}\right)$$

satisfies $u_t - \frac{1}{2}\Delta u = 0$.

Remark 2. If we put $g(s, y) = 0$ for $s < 0$, we recognize that $v(t, \cdot) = g * p$. Taking spatial Fourier transforms this can be written as

$$\hat{v}(t, \xi) = \int_0^t \hat{g}(s, \xi) \exp\left(-\frac{1}{2}(t-s)|\xi|^2\right) ds,$$

or

$$\frac{\partial \hat{v}}{\partial t} = \hat{g}(t, \xi) - \frac{1}{2}|\xi|^2 \hat{v}.$$

Therefore $v_t - \frac{1}{2}\Delta v = g$.
Exercise 4. Solve $w_t - \frac{1}{2}\Delta w = g$ on $[0, \infty) \times \mathbb{R}^d$ with $w = f$ on $\{0\} \times \mathbb{R}^d$ (Cauchy problem for the heat equation).

Uniqueness. The solution of the Cauchy problem is unique provided the class of solutions is suitably restricted. The uniqueness of the solution is a consequence of the Maximum Principle.

Maximum Principle. Let $u$ be smooth and bounded on $[0, T] \times \mathbb{R}^d$ satisfying

$$u_t - \frac{\Delta u}{2} \ge 0 \text{ in } (0, T] \times \mathbb{R}^d \quad \text{and} \quad u(0, x) \ge 0 \ \forall x \in \mathbb{R}^d.$$

Then

$$u(t, x) \ge 0 \quad \forall t \in [0, T] \text{ and } x \in \mathbb{R}^d.$$

Proof. The idea is to find minima for $u$ or for an auxiliary function.

Step 1. Let $v$ be any function satisfying

$$v_t - \frac{\Delta v}{2} > 0 \text{ in } (0, T] \times \mathbb{R}^d.$$

Claim. $v$ cannot attain a minimum for $t_0 \in (0, T]$. Assume (to get a contradiction) that $v(t_0, x_0) \le v(t, x)$ for some $t_0 > 0$ and for all $t \in [0, T]$, $x \in \mathbb{R}^d$. At a minimum $v_t(t_0, x_0) \le 0$ (since $t_0 > 0$) and $\Delta v(t_0, x_0) \ge 0$. Therefore

$$\left(v_t - \frac{\Delta v}{2}\right)(t_0, x_0) \le 0,$$

a contradiction. Thus, if $v$ has any minimum it should occur at $t_0 = 0$.

Step 2. Let $\epsilon > 0$ be arbitrary. Choose $\lambda$ such that

$$h(t, x) = \epsilon |x|^2 + \lambda t$$

satisfies $h_t - \frac{\Delta h}{2} = \lambda - \epsilon d > 0$ (say $\lambda = 2\epsilon d$).
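The Step 2 computation ($h_t = \lambda$, $\Delta h = 2\epsilon d$, hence $h_t - \Delta h/2 = \lambda - \epsilon d$) admits a quick finite-difference check; the dimension, $\epsilon$ and the sample point below are arbitrary choices:

```python
import numpy as np

# Central-difference check that h(t, x) = eps*|x|^2 + lam*t satisfies
# h_t - (Delta h)/2 = lam - eps*d  (> 0 when lam = 2*eps*d).
# Dimension d, eps and the sample point are illustrative choices.
d, eps = 3, 0.1
lam, dh = 2 * eps * d, 1e-4
x0, t0 = np.array([0.3, -1.2, 0.7]), 0.5

def h(t, x):
    return eps * np.dot(x, x) + lam * t

ht = (h(t0 + dh, x0) - h(t0 - dh, x0)) / (2 * dh)
lap = sum((h(t0, x0 + dh * e) - 2 * h(t0, x0) + h(t0, x0 - dh * e)) / dh**2
          for e in np.eye(d))
print(ht - lap / 2, lam - eps * d)    # both equal eps*d here
```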
Then

$$u(0, x) = 0, \quad u_t = \frac{\Delta u}{2}, \quad u \not\equiv 0,$$

i.e. $u$ satisfies

$$u_t - \frac{1}{2}\frac{\partial^2 u}{\partial x^2} = 0, \quad \text{with } u(0, x) = 0.$$

This example shows that the solution is not unique, because $u$ is not bounded. (This example is due to Tychonoff.)
Lemma 1. Let $p(t, x) = \dfrac{1}{(2\pi t)^{d/2}} \exp\left(-\dfrac{|x|^2}{2t}\right)$ for $t > 0$. Then

$$p(t, \cdot) * p(s, \cdot) = p(t+s, \cdot).$$

Proof. Let $f$ be any bounded continuous function and put

$$u(t, x) = \int_{\mathbb{R}^d} f(y)\, p(t, x-y)\, dy.$$

Then $u$ satisfies

$$u_t - \frac{1}{2}\Delta u = 0, \quad u(0, x) = f(x).$$

Let $v(t, x) = u(t+s, x)$. Then

$$v_t - \frac{1}{2}\Delta v = 0, \quad v(0, x) = u(s, x).$$

This has the unique solution

$$v(t, x) = \int u(s, y)\, p(t, x-y)\, dy.$$

Thus

$$\int_{\mathbb{R}^d} f(y)\, p(t+s, x-y)\, dy = \iint f(z)\, p(s, y-z)\, p(t, x-y)\, dz\, dy.$$
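The semigroup identity of Lemma 1 can also be checked by direct numerical convolution; the one-dimensional case suffices, and the grid and the times $t, s$ below are illustrative choices:

```python
import numpy as np

# Numerical check of Lemma 1 in one dimension: p(t,.) * p(s,.) = p(t+s,.).
# The grid and the times t, s are illustrative choices.
t, s = 0.7, 1.3
x = np.linspace(-30.0, 30.0, 4001)
dx = x[1] - x[0]

def p(t, x):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

conv = np.convolve(p(t, x), p(s, x), mode="same") * dx    # (p_t * p_s)(x)
print(np.max(np.abs(conv - p(t + s, x))))                 # close to zero
```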
4. Construction of Wiener Measure

ONE EXAMPLE WHERE the Kolmogorov construction yields a probability measure concentrated on a nice class is the Brownian motion.

Definition. A Brownian motion with starting point $x$ is an $\mathbb{R}^d$-valued stochastic process $\{X(t) : 0 \le t < \infty\}$ where

(i) $X(0) = x = $ constant;

(ii) the family of distributions is specified by

$$F_{t_1 \ldots t_k}(A) = \int_A p(0, x, t_1, x_1)\, p(t_1, x_1, t_2, x_2) \ldots p(t_{k-1}, x_{k-1}, t_k, x_k)\, dx_1 \ldots dx_k$$

for every Borel set $A$ in $\mathbb{R}^d \times \cdots \times \mathbb{R}^d$ ($k$ times).

N.B. The stochastic process appearing in the definition above is the one given by the Kolmogorov construction.

It may be useful to have the following picture of a Brownian motion. The space $\Omega$ may be thought of as representing particles performing Brownian movement; $\{X_t : 0 \le t < \infty\}$ then represents the trajectories of these particles in the space $\mathbb{R}^d$ as functions of time, and $\mathcal{B}$ can be considered as a representation of the observations made on these particles.

Exercise 2. (a) Show that $F_{t_1 \ldots t_k}$ defined above is a probability measure on $\mathbb{R}^d \times \cdots \times \mathbb{R}^d$ ($k$ times).
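The densities above make the increments $X(t_i) - X(t_{i-1})$ independent centred Gaussians with variance $(t_i - t_{i-1})$ in each coordinate, which for $x = 0$, $d = 1$ forces $\operatorname{Cov}(X(s), X(t)) = \min(s, t)$. A sampling sketch (times, seed and sample size are arbitrary choices):

```python
import numpy as np

# Sampling sketch: under F_{t_1...t_k} the increments X(t_i) - X(t_{i-1})
# are independent N(0, t_i - t_{i-1}) variables (d = 1, x = 0 here), so
# Cov(X(s), X(t)) = min(s, t).  Times and sample size are arbitrary.
rng = np.random.default_rng(0)
times = np.array([0.5, 1.0, 2.5])
sd = np.sqrt(np.diff(np.concatenate([[0.0], times])))
inc = rng.normal(0.0, 1.0, size=(200_000, 3)) * sd
X = np.cumsum(inc, axis=1)          # samples of (X(0.5), X(1.0), X(2.5))
print(np.cov(X.T))                  # approx min(t_i, t_j) entrywise
```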
Step 3. We want to show that $P_x(\Omega(D)) = 1$. By Exercise 4(b) this is equivalent to showing that

$$\operatorname*{Lt}_{j \to \infty} P\left(\Delta^{N, 1/j}_D(f) \le \frac{1}{k}\right) = 1 \quad \text{for all } N \text{ and } k.$$

The lemmas which follow will give the desired result.

Lemma (Levy). Let $X_1, \ldots, X_n$ be independent random variables, $\epsilon > 0$ and $\delta > 0$ arbitrary. If

$$P(|X_r + X_{r+1} + \cdots + X_n| \ge \epsilon) \le \delta \quad \forall r \text{ such that } 1 \le r \le n,$$

then

$$P\left(\sup_{1 \le j \le n} |X_1 + \cdots + X_j| \ge 2\epsilon\right) \le 2\delta.$$

(see Kolmogorov's theorem) for every $j = 1, 2, \ldots$, for every $N = 1, 2, \ldots$ and for every $k = 1, 2, \ldots$. (Hint: use the fact that the projections are continuous.)

(b) Show that

$$\Omega(D) = \bigcap_{N=1}^{\infty} \bigcap_{k=1}^{\infty} \bigcup_{j=1}^{\infty} \left\{\Delta^{N, 1/j}_D(f) \le \frac{1}{k}\right\}$$

and hence $\Omega(D)$ is measurable in $\prod \{\mathbb{R}^d_t : t \in D\}$.

Let $\pi_{t_1 \ldots t_k} : \Omega(D) \to \mathbb{R}^d \times \cdots \times \mathbb{R}^d$ ($k$ times) be the projections and let

$$E_{t_1 \ldots t_k} = \pi^{-1}_{t_1 \ldots t_k}\big(\mathcal{B}(\mathbb{R}^d) \times \cdots \times \mathcal{B}(\mathbb{R}^d)\big) \quad (k \text{ times}).$$

Put

$$E = \{E_{t_1 \ldots t_k} : 0 \le t_1 < t_2 < \ldots < t_k < \infty;\ t_i \in D\}.$$

Then, as

$$E_{t_1 \ldots t_k} \cup E_{s_1 \ldots s_l} \subset E_{\sigma_1 \ldots \sigma_m},$$

where $\{t_1, \ldots, t_k, s_1, \ldots, s_l\} \subset \{\sigma_1, \ldots, \sigma_m\}$, $E$ is an algebra. Let $\sigma(E)$ be the $\sigma$-algebra generated by $E$.

Lemma. Let $\mathcal{B}$ be the (topological) Borel $\sigma$-field of $\Omega(D)$. Then $\mathcal{B}$ is the $\sigma$-algebra generated by all the projections

$$\{\pi_{t_1 \ldots t_k} : 0 \le t_1 < t_2 < \ldots < t_k,\ t_i \in D\}.$$
Lemma. Let $\{X(t)\}$ be a Brownian motion, $I \subset [0, \infty)$ be a finite interval, $F \subset I \cap D$ be finite. Then

$$P_x\left(\sup_{t, \sigma \in F} |X(t) - X(\sigma)| \ge 4\epsilon\right) \le C(d)\, \frac{|I|^2}{\epsilon^4},$$

where $|I|$ is the length of the interval and $C(d)$ a constant depending only on $d$.

Remark. Observe that the estimate is independent of the finite set $F$.

Proof. Let $F = \{t_i : 0 \le t_1 < t_2 < \ldots < t_k\}$

$$\ldots = P_x(\Delta^{T, h}_D > \epsilon) \le \psi(T, \epsilon, h) = C\, \frac{h^2}{\epsilon^4}\left(\frac{T}{h} + 1\right).$$

Note that $\psi(T, \epsilon, h) \to 0$ as $h \to 0$ for every fixed $T$ and $\epsilon$.

Proof. Define the intervals $I_1, I_2, \ldots$ by

$$I_k = [(k-1)h, (k+1)h], \quad k = 1, 2, \ldots.$$

Let $I_1, I_2, \ldots, I_r$ be those intervals for which $I_j \cap [0, T] \neq \emptyset$ ($j = 1, 2, \ldots, r$). Clearly there are $\left[\frac{T}{h}\right] + 1$ of them. If $|t - s| \le h$ then $t, s \in I_j$ for some $j$, $1 \le j \le r$. Write $D = \bigcup_{n=1}^{\infty} F_n$ where $F_n \subset F_{n+1}$ and $F_n$ is finite. Then

$$P_x\left(\sup_{\substack{|t-s| \le h \\ t, s \in [0,T] \cap D}} |X(t) - X(s)| > \epsilon\right) = P_x\left(\bigcup_{n=1}^{\infty}\left\{\sup_{\substack{|t-s| \le h \\ t, s \in D \cap F_n}} |X(t) - X(s)| > \epsilon\right\}\right) = \sup_n P_x\left(\sup_j \sup_{t, s \in F_n \cap I_j} |X(t) - X(s)| > \epsilon\right)$$
everything else that affects it) through a vector $x$. The symmetry of the physical laws governing this motion tells us that any property exhibited by the first process should be exhibited by the second process and vice versa. Mathematically this is expressed by

Theorem. $P_x = P T_x^{-1}$.

Proof. It is enough to show that

$$P_x\big(T_x^{-1}\pi^{-1}_{t_1 \ldots t_k}(A_1 \times \cdots \times A_k)\big) = P\big(\pi^{-1}_{t_1 \ldots t_k}(A_1 \times \cdots \times A_k)\big)$$

for every $A_i$ Borel in $\mathbb{R}^d$. Clearly,

$$T_x^{-1}\pi^{-1}_{t_1 \ldots t_k}(A_1 \times \cdots \times A_k) = \pi^{-1}_{t_1 \ldots t_k}\big((A_1 - x) \times \cdots \times (A_k - x)\big).$$

Thus we have only to show that

$$\int_{A_1 - x} \cdots \int_{A_k - x} p(0, x, t_1, x_1) \ldots p(t_{k-1}, x_{k-1}, t_k, x_k)\, dx_1 \ldots dx_k = \int_{A_1} \cdots \int_{A_k} p(0, 0, t_1, x_1) \ldots p(t_{k-1}, x_{k-1}, t_k, x_k)\, dx_1 \ldots dx_k,$$

which is obvious.

Exercise. (a) If $\beta(t, \omega)$ is a Brownian motion starting at $(0, 0)$, then $\frac{1}{\sqrt{\alpha}}\beta(\alpha t)$ is a Brownian motion starting at $(0, 0)$ for every $\alpha > 0$.

(b) If $X$ is a $d$-dimensional Brownian motion and $Y$ is a $d'$-dimensional Brownian motion, then $(X, Y)$ is a $(d + d')$-dimensional Brownian motion provided that $X$ and $Y$ are independent.

(c) If $X_t = (X^1_t, \ldots, X^d_t)$ is a $d$-dimensional Brownian motion, then $X^j_t$ is a one-dimensional Brownian motion ($j = 1, 2, \ldots, d$).

$$\tau(w) = \inf\{t : |X_t(w)| \ge 1\} = \inf\{t : |w(t)| \ge 1\};$$

$\tau(w)$ is the first time the particle hits either of the horizontal lines $x = +1$ or $x = -1$.
i.e.

$$2P(B) \ge P\left(\bigcup_{i=1}^{n} A_i\right),$$

or

$$P\left(\max_{1 \le i \le n} X_i > a\right) \le 2P\{X_n > a\}.$$

Lemma 2. Let $Y_1, \ldots, Y_n$ be independent random variables. Put $X_n = \sum_{k=1}^{n} Y_k$ and let $\tau = \min\{i : X_i > a\}$, $a > 0$, with $\tau = \infty$ if there is no such $i$. Then for each $\epsilon > 0$,

(a) $P\{\tau \le n-1,\ X_n \le X_\tau - \epsilon\} \le P\{\tau \le n-1,\ X_n \le a\} + \sum_{j=1}^{n-1} P(Y_j > \epsilon)$;

(b) $P\{\tau \le n-1,\ X_n > a + 2\epsilon\} \le P\{\tau \le n-1,\ X_n - X_\tau > \epsilon\} + \sum_{j=1}^{n-1} P\{Y_j > \epsilon\}$;

(c) $P\{X_n > a + 2\epsilon\} \le P\{\tau \le n-1,\ X_n > a + 2\epsilon\} + P\{Y_n > 2\epsilon\}$.

If, further, $Y_1, \ldots, Y_n$ are symmetric, then

(d) $P\{\max_{1 \le i \le n} X_i > a,\ X_n \le a\} \ge P\{X_n > a + 2\epsilon\} - P\{Y_n > 2\epsilon\} - 2\sum_{j=1}^{n-1} P\{Y_j > \epsilon\}$;

(e) $P\{\max_{1 \le i \le n} X_i > a\} \ge 2P\{X_n > a + 2\epsilon\} - 2\sum_{j=1}^{n} P\{Y_j > \epsilon\}$.

Proof. (a) Suppose $w \in \{\tau \le n-1,\ X_n \le X_\tau - \epsilon\}$ and $w \notin \{\tau \le n-1,\ X_n \le a\}$. Then $X_n(w) > a$ and $X_n(w) + \epsilon \le X_{\tau(w)}(w)$, or $X_{\tau(w)}(w) > a + \epsilon$. By definition of $\tau(w)$, $X_{\tau(w)-1}(w) \le a$ and therefore

$$Y_{\tau(w)}(w) = X_{\tau(w)}(w) - X_{\tau(w)-1}(w) > a + \epsilon - a = \epsilon$$

if $\tau(w) > 1$; if $\tau(w) = 1$, $Y_{\tau(w)}(w) = X_{\tau(w)}(w) > a + \epsilon > \epsilon$. Thus $Y_j(w) > \epsilon$ for some $j \le n-1$.
$$\begin{aligned}
&= \sum_{k=1}^{n} P\{\tau = k\}\, P\{X_n - X_k > \epsilon\} - \sum_{j=1}^{n-1} P(Y_j > \epsilon) \quad \text{(by symmetry)} \\
&= P\{\tau \le n-1,\ X_n - X_\tau > \epsilon\} - \sum_{j=1}^{n-1} P(Y_j > \epsilon) \\
&\ge P\{\tau \le n-1,\ X_n > a + 2\epsilon\} - 2\sum_{j=1}^{n-1} P\{Y_j > \epsilon\} \quad \text{(by (b))} \\
&\ge P\{X_n > a + 2\epsilon\} - P\{Y_n > 2\epsilon\} - 2\sum_{j=1}^{n-1} P\{Y_j > \epsilon\} \quad \text{(by (c))}.
\end{aligned}$$

This proves (d).

(e)
$$\begin{aligned}
P\left\{\max_{1 \le i \le n} X_i > a\right\} &= P\left\{\max_{1 \le i \le n} X_i > a,\ X_n \le a\right\} + P\left\{\max_{1 \le i \le n} X_i > a,\ X_n > a\right\} \\
&= P\left\{\max_{1 \le i \le n} X_i > a,\ X_n \le a\right\} + P\{X_n > a\} \\
&\ge P\{X_n > a + 2\epsilon\} - P\{Y_n > 2\epsilon\} + P\{X_n > a\} - 2\sum_{j=1}^{n-1} P\{Y_j > \epsilon\} \quad \text{(by (d))}.
\end{aligned}$$

Since $P\{X_n > a\} \ge P\{X_n > a + 2\epsilon\}$ and $P\{Y_n > 2\epsilon\} \le P\{Y_n > \epsilon\} \le 2P\{Y_n > \epsilon\}$, we get

$$P\left\{\max_{1 \le i \le n} X_i > a\right\} \ge 2P\{X_n > a + 2\epsilon\} - 2\sum_{j=1}^{n} P(Y_j > \epsilon).$$

This completes the proof.

Proof of the reflection principle. By Lemma 1,

$$p = P\left(\max_{1 \le j \le n} X_{\frac{jt}{n}} > a\right) \le 2P(X(t) > a).$$
We let $n$ tend to $\infty$ through the values $2, 2^2, 2^3, \ldots$, so that we get

$$2P\{X(t) > a + 2\epsilon\} - 2nP\{X(t/n) > \epsilon\} \le P\left(\max_{1 \le j \le n} X(jt/n) > a\right) \le 2P\{X(t) > a\},$$

or

$$2P\{X(t) > a\} = 2P\{X(t) \ge a\} \le P\left(\max_{0 \le s \le t} X(s) > a\right) \le 2P\{X(t) > a\},$$

on letting $n \to +\infty$ first and then letting $\epsilon \to 0$. Therefore,

$$P\left(\max_{0 \le s \le t} X(s) > a\right) = 2P\{X(t) > a\} = 2\int_a^{\infty} \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t}\, dx.$$
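The identity just obtained is easy to probe by Monte Carlo; a discrete path slightly undercounts the continuous maximum, so only rough agreement is expected (seed, step count and sample size below are illustrative choices):

```python
import numpy as np

# Monte Carlo sketch of P(max_{0<=s<=t} X(s) > a) = 2 P(X(t) > a).
# A discrete path slightly undercounts the continuous maximum, so only
# rough agreement is expected; parameters are illustrative.
rng = np.random.default_rng(1)
t, a, n, paths = 1.0, 1.0, 2000, 50_000
X = np.zeros(paths)
M = np.zeros(paths)
for _ in range(n):
    X = X + rng.normal(0.0, np.sqrt(t / n), size=paths)
    M = np.maximum(M, X)
lhs = np.mean(M > a)
rhs = 2 * np.mean(X > a)
print(lhs, rhs)        # both near 2 * P(N(0,1) > 1) ~ 0.317
```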
AN APPLICATION. Consider a one-dimensional Brownian motion. A particle starts at 0. What can we say about the behaviour of the particle in a small interval of time $[0, \epsilon)$? The answer is given by the following result.

$$P(A) \equiv P\{w : \forall \epsilon > 0\ \exists\, t, s \text{ in } [0, \epsilon) \text{ such that } X_t(w) > 0 \text{ and } X_s(w) < 0\} = 1.$$

INTERPRETATION. Near zero all the particles oscillate about their starting point. Let

$$A^+ = \{w : \forall \epsilon > 0\ \exists t \in [0, \epsilon) \text{ such that } X_t(w) > 0\},$$
$$A^- = \{w : \forall \epsilon > 0\ \exists s \in [0, \epsilon) \text{ such that } X_s(w) < 0\}.$$

We show that $P(A^+) = P(A^-) = 1$ and therefore $P(A) = P(A^+ \cap A^-) = 1$.

$$A^+ \supset \bigcap_{n=1}^{\infty} \left\{\sup_{0 \le t \le 1/n} w(t) > 0\right\} = \bigcap_{n=1}^{\infty} \bigcup_{m=1}^{\infty} \left\{\sup_{0 \le t \le 1/n} w(t) \ge 1/m\right\}$$
Therefore

$$P(A^+) \ge \operatorname*{Lt}_{n \to \infty} \sup_m P\left(\sup_{0 \le t \le 1/n} w(t) \ge 1/m\right) = 2 \operatorname*{Lt}_{n \to \infty} \sup_m P(w(1/n) \ge 1/m) \quad \text{(by the reflection principle)} = 1.$$

Similarly $P(A^-) = 1$.
Theorem. Let $\{X_t\}$ be a one-dimensional Brownian motion, $A \subset (-\infty, a)$ ($a > 0$) any Borel subset of $\mathbb{R}$. Then

$$P_0\{X_t \in A,\ X_s < a\ \forall s \text{ such that } 0 \le s \le t\} = \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, dy - \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-(2a-y)^2/2t}\, dy.$$

Proof. Let $\tau(w) = \inf\{t : w(t) \ge a\}$. By the strong Markov property of Brownian motion,

$$P_0\{B \cap (X(\tau + s) - X(\tau) \in A)\} = P_0(B)\, P_0(X(s) \in A)$$

for every set $B$ in $\mathcal{F}_\tau$. This can be written as

$$E\big(\chi_{X(\tau+s) - X(\tau) \in A}\, \big|\, \mathcal{F}_\tau\big) = P_0(X(s) \in A).$$

Therefore

$$E\big(\chi_{X(\tau + \theta(w)) - X(\tau) \in A}\, \big|\, \mathcal{F}_\tau\big) = P_0(X(\theta(w)) \in A)$$

for every function $\theta(w)$ which is $\mathcal{F}_\tau$-measurable. Therefore,

$$P_0\big((\tau \le t) \cap ((X(\tau + \theta(w)) - X(\tau)) \in A)\big) = \int_{\{\tau \le t\}} P_0(X(\theta(w)) \in A)\, dP(w).$$

In particular, take $\theta(w) = t - \tau(w)$; clearly $\theta(w)$ is $\mathcal{F}_\tau$-measurable. Therefore,

$$P_0\big((\tau \le t) \cap ((X(t) - X(\tau)) \in A)\big) = \int_{\{\tau \le t\}} P_0(X(\theta(w)) \in A)\, dP(w).$$
Now $X(\tau(w)) = a$. Replace $A$ by $A - a$ to get

(*) $\quad P_0\big((\tau \le t) \cap (X(t) \in A)\big) = \int_{\{\tau \le t\}} P_0(X(\theta(w)) \in A - a)\, dP(w).$

Consider now

$$P_{2a}(X(t) \in A) = P_0(X(t) \in A - 2a) = P_0(X(t) \in 2a - A) \quad \text{(by symmetry of } X\text{)} = P_0\big((\tau \le t) \cap (X(t) \in 2a - A)\big).$$

The last step follows from the fact that $A \subset (-\infty, a)$, so that $2a - A \subset (a, \infty)$, and the continuity of the Brownian paths. Therefore

$$P_{2a}(X(t) \in A) = \int_{\{\tau \le t\}} P_0(X(\theta(w)) \in a - A)\, dP(w) \quad (\text{using } (*)) = P_0\big((\tau \le t) \cap (X(t) \in A)\big),$$

the last equality by symmetry and (*) again. Now the required probability

$$P_0\{X_t \in A,\ X_s < a\ \forall s,\ 0 \le s \le t\} = P_0\{X_t \in A\} - P_0\{(\tau \le t) \cap (X_t \in A)\} = \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, dy - \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-(2a-y)^2/2t}\, dy.$$
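A Monte Carlo sanity check of the theorem, with the illustrative choices $A = (-\infty, 0]$, $a = 1$, $t = 1$, for which the right-hand side is $P(X_t \le 0) - P(X_t \ge 2a)$:

```python
import numpy as np
from math import erfc, sqrt

# Monte Carlo sketch of the theorem for A = (-inf, 0], a = 1, t = 1:
# P0{X_t <= 0, X_s < a for all s <= t} = P(X_t <= 0) - P(X_t >= 2a)
#                                      = 0.5 - P(N(0,1) >= 2) ~ 0.4772.
# Parameters are illustrative; discretization biases the estimate slightly.
rng = np.random.default_rng(2)
t, a, n, paths = 1.0, 1.0, 2000, 50_000
X = np.zeros(paths)
M = np.zeros(paths)
for _ in range(n):
    X = X + rng.normal(0.0, np.sqrt(t / n), size=paths)
    M = np.maximum(M, X)
mc = np.mean((X <= 0.0) & (M < a))
exact = 0.5 - 0.5 * erfc(2.0 / sqrt(2.0))
print(mc, exact)
```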
The intuitive idea of the previous theorem is quite clear. To obtain the paths that reach $A$ at time $t$ without hitting the horizontal line $x = a$, we consider all paths that reach $A$ at time $t$ and subtract those paths that hit the horizontal line $x = a$ before time $t$ and then reach $A$ at time $t$. To see exactly which paths reach $A$ at time $t$ after hitting $x = a$ we consider a typical path $X(w)$.
The reflection principle (or the strong Markov property) allows us to replace this path by the dotted path (see Fig.). The symmetry of the Brownian motion can then be used to reflect this path about the line $x = a$ and obtain the path shown in dark. Thus we have the following result:

the probability that a Brownian particle starts from $x = 0$ at $t = 0$ and reaches $A$ at time $t$ after it has hit $x = a$ at some time $\tau \le t$ is the same as if the particle started at time $t = 0$ at $x = 2a$ and reached $A$ at time $t$. (The continuity of the path ensures that at some time $\tau \le t$, this particle has to hit $x = a$.)

We shall use the intuitive approach in what follows, the mathematical analysis being clear, though lengthy.
Theorem. Let $X(t)$ be a one-dimensional Brownian motion, $A \subset (-1, 1)$ any Borel subset of $\mathbb{R}$. Then

$$P_0\left(\sup_{0 \le s \le t} |X(s)| < 1,\ X(t) \in A\right) = \int_A \varphi(t, y)\, dy,$$

where

$$\varphi(t, y) = \sum_{n=-\infty}^{\infty} \frac{(-1)^n}{\sqrt{2\pi t}}\, e^{-(y-2n)^2/2t}.$$
$$= \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, dy - P[(E_1 \cup F_1) \cap A_0],$$

where $A_0 = \{X(t) \in A\}$;

$$= \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, dy - P[(E_1 \cap A_0) \cup (F_1 \cap A_0)].$$

Use the fact that $P[A \cup B] = P(A) + P(B) - P(A \cap B)$ to get

$$\varphi(t, A) = \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, dy - P[E_1 \cap A_0] - P[F_1 \cap A_0] + P[E_1 \cap F_1 \cap A_0],$$

as $E_1 \cap F_1 = E_2 \cup F_2$. Proceeding successively we finally get

$$\varphi(t, A) = \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-y^2/2t}\, dy + \sum_{n=1}^{\infty} (-1)^n P[E_n \cap A_0] + \sum_{n=1}^{\infty} (-1)^n P[F_n \cap A_0].$$

We shall obtain the expressions for $P(E_1 \cap A_0)$ and $P[E_2 \cap A_0]$; the other terms can be obtained similarly.

$E_1 \cap A_0$ consists of those trajectories that hit $x = 1$ at some time $\tau \le t$ and then reach $A$ at time $t$. Thus $P[E_1 \cap A_0]$ is given by the previous theorem:

$$P[E_1 \cap A_0] = \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-(y-2)^2/2t}\, dy.$$

$E_2 \cap A_0$ consists of those trajectories that hit $x = 1$ at time $\tau_1$, hit $x = -1$ at time $\tau_2$ and reach $A$ at time $t$ ($\tau_1 < \tau_2 < t$). According to the previous theorem we can reflect the trajectory up to $\tau_2$ about $x = -1$, so that $P(E_2 \cap A_0)$ is the same as if the particle starts at $x = -2$ at time $t = 0$, hits $x = -3$ at time $\tau_1$ and ends up in $A$ at time $t$. We can now reflect the trajectory
up to time $\tau_1$ (the dotted curve should be reflected) about $x = -3$ to obtain the required probability as if the trajectory started at $x = -4$. Thus,

$$P(E_2 \cap A_0) = \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-(y+4)^2/2t}\, dy.$$

Thus

$$\varphi(t, A) = \sum_{n=-\infty}^{\infty} (-1)^n \int_A \frac{1}{\sqrt{2\pi t}}\, e^{-(y-2n)^2/2t}\, dy = \int_A \varphi(t, y)\, dy.$$

The previous theorem leads to an interesting result:

$$P\left(\sup_{0 \le s \le t} |X(s)| < 1\right) = \int_{-1}^{1} \varphi(t, y)\, dy.$$
Therefore

$$P\left(\sup_{0 \le s \le t} |X(s)| \ge 1\right) = 1 - P\left(\sup_{0 \le s \le t} |X(s)| < 1\right)$$
$$= 1 - \int_{-1}^{1} \varphi(t, y)\, dy, \qquad \varphi(t, y) = \sum_{n=-\infty}^{\infty} \frac{(-1)^n}{\sqrt{2\pi t}}\, e^{-(y-2n)^2/2t}.$$
Case (i): $t$ very small. In this case it is enough to consider the terms corresponding to $n = 0, \pm 1$ (the higher order terms are very small). As $y$ varies from $-1$ to $1$,

$$\varphi(t, y) \approx \frac{1}{\sqrt{2\pi t}}\left[e^{-y^2/2t} - e^{-(y-2)^2/2t} - e^{-(y+2)^2/2t}\right].$$

Therefore

$$1 - \int_{-1}^{1} \varphi(t, y)\, dy \approx 4\int_{1}^{\infty} \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t}\, dx \approx 4\sqrt{\frac{t}{2\pi}}\, e^{-1/2t}.$$
Case (ii): $t$ large. In this case we use Poisson's summation formula for $\varphi(t, y)$:

$$\varphi(t, y) = \sum_{k=0}^{\infty} e^{-(2k+1)^2 \pi^2 t/8} \cos\left(\frac{(2k+1)\pi y}{2}\right),$$

to get

$$\int_{-1}^{1} \varphi(t, y)\, dy \approx \frac{4}{\pi}\, e^{-\pi^2 t/8}$$

for large $t$. Thus, $P(\tau > t) \approx \frac{4}{\pi}\, e^{-\pi^2 t/8}$.

This result says that for large values of $t$ the probability of paths which stay between $-1$ and $+1$ is very, very small and the decay rate is governed by the factor $e^{-\pi^2 t/8}$. This is connected with the solution of a certain differential equation, as shall be seen later on.
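Both series for $\varphi$ and the large-$t$ decay rate can be verified numerically; the value of $t$ and the truncation lengths below are arbitrary choices:

```python
import numpy as np

# Numerical sketch: the image series and the Poisson-summation series for
# phi(t, y) agree, and P(tau > t) ~ (4/pi) exp(-pi^2 t / 8) for large t.
# The value of t and the truncation lengths are arbitrary choices.
t = 3.0
y = np.linspace(-1.0, 1.0, 801)

images = sum((-1) ** n * np.exp(-(y - 2 * n) ** 2 / (2 * t))
             for n in range(-50, 51)) / np.sqrt(2 * np.pi * t)
cosine = sum(np.exp(-(2 * k + 1) ** 2 * np.pi ** 2 * t / 8)
             * np.cos((2 * k + 1) * np.pi * y / 2) for k in range(50))
print(np.max(np.abs(images - cosine)))       # the two series agree

stay = np.sum(cosine) * (y[1] - y[0])        # ~ P(sup_{s<=t} |X(s)| < 1)
print(stay, 4 / np.pi * np.exp(-np.pi**2 * t / 8))
```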
8. Blumenthal's Zero-One Law
is clearly dense in $L^2(\Omega, \mathcal{B}, P)$.
Proof of the zero-one law. Let

$$H_t = L^2(\Omega, \mathcal{F}_t, P), \quad H = L^2(\Omega, \mathcal{B}, P), \quad H_{0+} = \bigcap_{t>0} H_t.$$

Clearly $H_{0+} = L^2(\Omega, \mathcal{F}_{0+}, P)$.

Let $\pi_t : H \to H_t$ be the projection. Then $\pi_t f \to \pi_{0+} f$ in $H$. To prove the law it is enough to show that $H_{0+}$ contains only constants, which is equivalent to $\pi_{0+} f = $ constant $\forall f$ in $H$. As $\pi_{0+}$ is continuous and linear it is enough to show $\pi_{0+}\varphi = $ constant for $\varphi$ of Lemma 2:

$$\pi_{0+}\varphi = \operatorname*{Lt}_{t \to 0} \pi_t \varphi = \operatorname*{Lt}_{t \to 0} E(\varphi | \mathcal{F}_t) \quad \text{(by Lemma 1)} = \operatorname*{Lt}_{t \to 0} E(\varphi(X_{t_1}, \ldots, X_{t_k}) | \mathcal{F}_t).$$

We can assume without loss of generality that $t < t_1 < t_2 < \ldots < t_k$. Then

$$E(\varphi(X_{t_1}, \ldots, X_{t_k}) | \mathcal{F}_t) = \int \varphi(y_1, \ldots, y_k)\, \frac{1}{[2\pi(t_1 - t)]^{d/2}}\, e^{-|y_1 - X_t(w)|^2/2(t_1 - t)} \cdots \frac{1}{[2\pi(t_k - t_{k-1})]^{d/2}}\, e^{-|y_k - y_{k-1}|^2/2(t_k - t_{k-1})}\, dy_1 \ldots dy_k.$$

Since $X_0(w) = 0$ we get, as $t \to 0$, $\pi_{0+}\varphi = $ constant. This completes the proof.
APPLICATION. Let

$$A = \left\{w : \int_0^1 \frac{|w(t)|}{t}\, dt < \infty\right\}.$$

Then $A \in \mathcal{F}_{0+}$. For, if $0 < s < 1$, then $\int_s^1 \frac{|w(t)|}{t}\, dt < \infty$. Therefore $w \in A$ or not according as $\int_0^s \frac{|w(t)|}{t}\, dt$ converges or not. But this convergence can be asserted
9. Properties of Brownian Motion in One Dimension
WE NOW PROVE the following.
Lemma. Let $(X_t)$ be a one-dimensional Brownian motion. Then

(a) $P(\overline{\lim}\, X_t = +\infty) = 1$; consequently $P(\overline{\lim}\, X_t < \infty) = 0$.

(b) $P(\underline{\lim}\, X_t = -\infty) = 1$; consequently $P(\underline{\lim}\, X_t > -\infty) = 0$.

(c) $P(\overline{\lim}\, X_t = +\infty,\ \underline{\lim}\, X_t = -\infty) = 1$.
SIGNIFICANCE. By (c) almost every Brownian path assumes each
value infinitely often.
Proof.
$$\{\overline{\lim}\, X_t = \infty\} = \bigcap_{n=1}^{\infty} \{\overline{\lim}\, X_t > n\} = \bigcap_{n=1}^{\infty} \Big\{\overline{\lim_{\sigma \text{ rational}}}\, X_\sigma > n\Big\} \quad \text{(by continuity of Brownian paths)}.$$

First, note that

$$P_0\left(\sup_{0 \le s \le t} X(s) \le n\right) = 1 - P_0\left(\sup_{0 \le s \le t} X(s) > n\right)$$
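This first probability can be evaluated in closed form: by the reflection principle $P_0(\sup_{0 \le s \le t} X(s) \le n) = 1 - 2P_0(X(t) > n) = \operatorname{erf}(n/\sqrt{2t})$, which tends to 0 as $t \to \infty$ for every fixed $n$, and this decay is what drives the proof. A short check ($n = 5$ is an arbitrary choice):

```python
from math import erf, sqrt

# Sketch of the first step: by the reflection principle
# P0(sup_{0<=s<=t} X(s) <= n) = 1 - 2 P0(X(t) > n) = erf(n / sqrt(2t)),
# which tends to 0 as t -> infinity for every fixed n (n = 5 here).
n = 5.0
probs = [erf(n / sqrt(2.0 * t)) for t in (1.0, 100.0, 10_000.0, 1_000_000.0)]
print(probs)    # decreases towards 0
```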
Remark. If $d \ge 3$ we shall see later that $P(\operatorname{Lt}_{t \to \infty} |X_t| = \infty) = 1$, i.e. almost every Brownian path wanders off to $\infty$.
Theorem. Almost all Brownian paths are of unbounded variation in any interval.

Proof. Let $I$ be any interval $[a, b]$ with $a < b$. For $n = 1, 2, \ldots$ define

$$V_n(w, Q_n) = \sum_{i=1}^{n} |w(t_i) - w(t_{i-1})| \quad (t_i = a + (b-a)i/n,\ i = 0, 1, 2, \ldots, n),$$

the variation corresponding to the partition $Q_n$ dividing $[a, b]$ into $n$ equal parts. Let

$$U_n(w, Q_n) = \sum_{i=1}^{n} |w(t_i) - w(t_{i-1})|^2.$$

If

$$A_n(w, Q_n) = \sup_{1 \le i \le n} |w(t_i) - w(t_{i-1})|,$$

then

$$A_n(w, Q_n)\, V_n(w, Q_n) \ge U_n(w, Q_n).$$

By continuity $\operatorname{Lt}_n A_n(w, Q_n) = 0$.

Claim. $\operatorname{Lt}_n E[(U_n(w, Q_n) - (b-a))^2] = 0$.

Proof.

$$E[(U_n - (b-a))^2] = E\left[\left(\sum_{j=1}^{n} \left[(X_{t_j} - X_{t_{j-1}})^2 - \frac{b-a}{n}\right]\right)^2\right] = \sum_{j=1}^{n} E\left[\left(Z_j^2 - \frac{b-a}{n}\right)^2\right], \quad Z_j = X_{t_j} - X_{t_{j-1}}, = n\, E\left[\left(Z_1^2 - \frac{b-a}{n}\right)^2\right]$$
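A simulation makes the dichotomy vivid: on $[a, b] = [0, 1]$ the quadratic sums $U_n$ settle near $b - a = 1$ while the variations $V_n$ grow like $\sqrt{n}$ (seed and sizes below are arbitrary choices):

```python
import numpy as np

# Simulation sketch of the theorem on [a, b] = [0, 1]: the squared sums
# U_n concentrate at b - a = 1, while V_n ~ sqrt(2n/pi) grows without
# bound, so the paths cannot have bounded variation.  Sizes are arbitrary.
rng = np.random.default_rng(3)
for n in (100, 10_000):
    inc = rng.normal(0.0, np.sqrt(1.0 / n), size=n)   # increments of X
    U, V = np.sum(inc**2), np.sum(np.abs(inc))
    print(n, U, V)    # U near 1; V near sqrt(2 * n / pi)
```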
10. Dirichlet Problem and Brownian Motion
Proof. The distributions $\{F_{t_1, \ldots, t_k}\}$ defining Brownian motion are invariant under rotations. Thus $\mu_{S(0, \rho)}$ is a rotationally invariant probability measure. The result follows from the fact that the only probability measure (on the surface of a sphere) that is invariant under rotations is the normalised surface area.
Theorem. Let $G$ be any bounded region, $f$ a bounded measurable real-valued function defined on $\partial G$. Define $u(x) = E_x(f(X_{\tau_G}))$. Then

(i) $u$ is measurable and bounded;

(ii) $u$ has the mean value property; consequently,

(iii) $u$ is harmonic in $G$.

Proof. (i) To prove this, it is enough to show that the mapping $x \to P_x(A)$ is measurable for every Borel set $A$. Let

$$C = \{A \in \mathcal{B} : x \to P_x(A) \text{ is measurable}\}.$$

It is clear that $\pi^{-1}_{t_1, \ldots, t_k}(B) \in C$ for every Borel set $B$ in $\mathbb{R}^d \times \cdots \times \mathbb{R}^d$. As $C$ is a monotone class, $C = \mathcal{B}$.

(ii) Let $S$ be any sphere with centre at $x$ such that $\bar{S} \subset G$. Let $\tau = \tau_S$ denote the exit time through $S$. Clearly $\tau \le \tau_G$. By the strong Markov property,

$$u(X_\tau) = E(f(X_{\tau_G}) | \mathcal{F}_\tau).$$

Now

$$u(x) = E_x(f(X_{\tau_G})) = E_x\big(E(f(X_{\tau_G}) | \mathcal{F}_\tau)\big) = E_x(u(X_\tau)) = \int_S u(y)\, \mu_S(x, dy) = \frac{1}{|S|} \int_S u(y)\, dS,$$

$|S| = $ surface area of $S$.
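The mean value property in (ii) is easy to test numerically for a known harmonic function; the function, centre and radius below are arbitrary choices:

```python
import numpy as np

# Numeric sketch of the mean value property: u(x, y) = x^2 - y^2 is
# harmonic in the plane, so its average over any circle equals its value
# at the centre.  Function, centre and radius are illustrative choices.
theta = np.linspace(0.0, 2.0 * np.pi, 10_000, endpoint=False)
cx, cy, r = 0.4, -0.2, 0.7

u = lambda x, y: x**2 - y**2
avg = np.mean(u(cx + r * np.cos(theta), cy + r * np.sin(theta)))
print(avg, u(cx, cy))    # both 0.12
```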
As the $A_n$'s are increasing, the $B_n$'s are decreasing and $B_n \in \mathcal{F}_{1/n}$, so that $B \in \mathcal{F}_{0+}$. We show that $P(B) > 0$, so that by Blumenthal's zero-one law $P(B) = 1$, i.e. $P(A) = 0$.

$$P_y(B) = \lim_n P_y(B_n) \ge \lim_n P_y\{w : w(0) = y,\ w(2^{-n}) \in C_h \setminus \{y\}\}.$$

Thus

$$P_y(B) \ge \lim_n \int_{C_h \setminus \{y\}} \frac{1}{(2\pi 2^{-n})^{d/2}} \exp\left(-\frac{|z - y|^2}{2 \cdot 2^{-n}}\right) dz = \int_C \frac{1}{(2\pi)^{d/2}}\, e^{-|z|^2/2}\, dz,$$

where $C$ is the cone of infinite height obtained from $C_h$. Thus $P_y(B) > 0$.

Step 2. If $C$ is closed then the mapping $x \to P_x(C)$ is upper semicontinuous.

For, denote by $\chi_C$ the indicator function of $C$. As $C$ is closed (in a metric space) there exists a sequence of continuous functions $f_n$ decreasing to $\chi_C$ such that $0 \le f_n \le 1$. Thus $E_x(f_n)$ decreases to $E_x(\chi_C) = P_x(C)$. Clearly $x \to E_x(f_n)$ is continuous. The result follows from the fact that the infimum of any collection of continuous functions is upper semicontinuous.

Step 3. Let $\epsilon > 0$,

$$N(y; \epsilon) = \{z \in \partial G : |z - y| < \epsilon\}, \quad B = \{w : w(0) \in \bar{G},\ X_{\tau_G}(w) \in \partial G \setminus N(y; \epsilon)\},$$

i.e. $B$ consists of trajectories which start at a point of $\bar{G}$ and escape for the first time through $\partial G$ at a point not in $N(y; \epsilon)$. If $C = \bar{B}$, then

$$C \cap \{w : w(0) = y\} \subset A \cap \{w : w(0) = y\},$$

where $A$ is as in Step 1.
For, suppose $w \in C \cap \{w : w(0) = y\}$. Then there exist $w_n \in B$ such that $w_n \to w$ uniformly on compact sets. If $w \notin A \cap \{w : w(0) = y\}$, then for every $\delta > 0$ there exists $t \in (0, \delta]$ with $w(t) \notin \bar{G}$. Choose $\delta > 0$ so small that $\eta = \inf_{0 \le t \le \delta} d(w(t), \partial G \setminus N(y, \epsilon)) > 0$. Let $t_n = \tau_G(w_n)$. If $t_n$ does not converge to 0, then there exist a subsequence, again denoted by $t_n$, and a $k \in (0, \delta)$ with $w(k) \notin \bar{G}$, such that $t_n \ge k > 0$. Since $w_n(k) \in \bar{G}$ and $w_n(k) \to w(k)$, we get $w(k) \in \bar{G}$, a contradiction. Thus we can assume that $t_n$ converges to 0 and also that $t_n \le \delta$. But then

(*) $\quad |w_n(t_n) - w(t_n)| \ge \eta.$

However, as $w_n$ converges to $w$ uniformly on $[0, \delta]$,

$$w_n(t_n) - w(t_n) \to w(0) - w(0) = 0,$$

contradicting (*). Thus $w \in A \cap \{w : w(0) = y\}$.
Step 4. $\lim_{x \to y,\, x \in G} P_x(B) = 0$.

For,
$$\varlimsup_{x \to y} P_x(B) \le \varlimsup_{x \to y} P_x(C) \le P_y(C) \quad \text{(by Step 2)}$$
$$= P_y(C \cap \{w : w(0) = y\}) \le P_y(A) \quad \text{(by Step 3)} = 0.$$
Step 5.
$$|u(x) - f(y)| = \Big|\int f(X_{\tau_G}(w))\,dP_x(w) - \int f(y)\,dP_x(w)\Big|$$
$$\le \int_{B^c} |f(X_{\tau_G}(w)) - f(y)|\,dP_x(w) + \Big|\int_B (f(X_{\tau_G}(w)) - f(y))\,dP_x(w)\Big|$$
$$\le \int_{B^c} |f(X_{\tau_G}(w)) - f(y)|\,dP_x(w) + 2\|f\|\,P_x(B),$$
and the right hand side converges to 0 as $x \to y$ (by Step 4 and the fact that f is continuous). This proves the theorem.
Remark. The theorem is local.
Theorem. Let $G = \{y \in \mathbb{R}^d : \varepsilon < |y| < R\}$ and let f be any continuous function on $\partial G = \{|y| = \varepsilon\} \cup \{|y| = R\}$. If u is any harmonic function in G with boundary values f, then $u(x) = E_x(f(X_{\tau_G}))$.
Proof. Clearly G has the exterior cone property. Thus, if
$$v(x) = E_x(f(X_{\tau_G})),$$
then v is harmonic in G and has boundary values f (by the previous theorem). The result follows from the uniqueness of the solution of the Dirichlet problem for the Laplacian operator.
The function f = 0 on $|y| = R$ and f = 1 on $|y| = \varepsilon$ is of special interest. Denote by $U^{R,0}_{\varepsilon,1}$ the corresponding solution of the Dirichlet problem.
Exercise. (i) If d = 2 then
$$U^{R,0}_{\varepsilon,1}(x) = \frac{\log R - \log |x|}{\log R - \log \varepsilon}, \qquad x \in G.$$
(ii) If $d \ge 3$ then
$$U^{R,0}_{\varepsilon,1}(x) = \frac{|x|^{-d+2} - R^{-d+2}}{\varepsilon^{-d+2} - R^{-d+2}}.$$
Case (i): d = 2. Then
$$\frac{\log R - \log |x|}{\log R - \log \varepsilon} = U^{R,0}_{\varepsilon,1}(x).$$
Now,
$$E_x(f(X_{\tau_G})) = \int_{|y| = \varepsilon} \pi_G(x, dy) = P_x(|X_{\tau_G}| = \varepsilon),$$
i.e.
$$\frac{\log R - \log |x|}{\log R - \log \varepsilon} = P_x(|X_{\tau_G}| = \varepsilon)$$
$$= P_x(\text{the particle hits } |y| = \varepsilon \text{ before it hits } |y| = R).$$
Fix R and let $\varepsilon \to 0$; then $0 = P_x(\text{the particle hits } 0 \text{ before hitting } |y| = R)$. Letting R take the values $1, 2, 3, \ldots$, we get $0 = P_x(\text{the particle hits } 0 \text{ before hitting any of the circles } |y| = N)$. Recalling that
$$P_x\big(\varlimsup_{t \to \infty} |X_t| = \infty\big) = 1,$$
we get
Proposition. A two-dimensional Brownian motion does not visit a point.
Next, keep $\varepsilon$ fixed and let $R \to \infty$; then
$$1 = P_x(|w(t)| = \varepsilon \text{ for some time } t > 0).$$
Since any time t can be taken as the starting time for the Brownian motion, we have

Proposition. Two-dimensional Brownian motion has the recurrence property.
Case (ii): $d \ge 3$. In this case
$$P_x(w : w \text{ hits } |y| = \varepsilon \text{ before it hits } |y| = R) = \frac{1/|x|^{d-2} - 1/R^{d-2}}{1/\varepsilon^{d-2} - 1/R^{d-2}}.$$
Letting $R \to \infty$ we get
$$P_x(w : w \text{ hits } |y| = \varepsilon) = (\varepsilon/|x|)^{d-2},$$
which lies strictly between 0 and 1. Fixing $\varepsilon$ and letting $|x| \to \infty$, we have
Proposition. If the particle starts at a point far away from 0 then it has very little chance of hitting the sphere $|y| = \varepsilon$. If $|x| \le \varepsilon$, then
$$P_x(w \text{ hits } S) = 1, \quad \text{where } S = \{y \in \mathbb{R}^d : |y| = \varepsilon\}.$$
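The hitting probability above can be checked numerically. The sketch below is not from the text; it uses the walk-on-spheres method (which relies only on the fact that Brownian motion exits a ball at a point uniformly distributed on its boundary) to estimate, for d = 3, the probability of hitting the inner sphere before the outer one. The parameters $\varepsilon$, R, x and the absorption tolerance are illustrative choices.

```python
import math
import random

def hits_inner_first(x, eps, R, tol=1e-3):
    """Walk on spheres: from pos, jump to a uniform point on the largest
    sphere centred at pos that stays inside the annulus eps < |y| < R.
    Terminate when pos is within tol of either boundary."""
    pos = list(x)
    while True:
        r = math.sqrt(sum(c * c for c in pos))
        d_in, d_out = r - eps, R - r
        if d_in < tol:
            return True            # absorbed at |y| = eps
        if d_out < tol:
            return False           # absorbed at |y| = R
        step = min(d_in, d_out)
        g = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in g))
        pos = [p + step * c / norm for p, c in zip(pos, g)]

random.seed(0)
eps, R = 1.0, 4.0
x = (2.0, 0.0, 0.0)                # |x| = 2
n = 20000
est = sum(hits_inner_first(x, eps, R) for _ in range(n)) / n
# exact value (1/|x|^{d-2} - 1/R^{d-2}) / (1/eps^{d-2} - 1/R^{d-2}), d = 3
exact = (1 / 2.0 - 1 / R) / (1 / eps - 1 / R)
print(est, exact)
```

Letting R grow, `exact` tends to $\varepsilon/|x| = 1/2$, in agreement with the displayed formula.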
Let
$$V(x) = (\varepsilon/|x|)^{d-2} \quad \text{for } |x| \ge \varepsilon.$$
In view of the above result it is natural to extend V to all of space by putting $V(x) = 1$ for $|x| \le \varepsilon$. As Brownian motion has the Markov property,
$$P_x\{w : w \text{ hits } S \text{ after time } t\} = \int V(y)\,\frac{1}{(2\pi t)^{d/2}} \exp\big(-|y - x|^2/2t\big)\,dy \to 0 \text{ as } t \to +\infty.$$
Thus $P(w \text{ hits } S \text{ for arbitrarily large } t) = 0$. In other words, $P(w : \varliminf_{t \to \infty} |w(t)| \ge \varepsilon) = 1$. As this is true for every $\varepsilon > 0$, we get the following important result.

Proposition. $P(\lim_{t \to \infty} |w(t)| = \infty) = 1$, i.e. for $d \ge 3$ the Brownian particle wanders away to $+\infty$.
11. Stochastic Integration
LET $\{X_t : t \ge 0\}$ BE A one-dimensional Brownian motion. We want first to define integrals of the type $\int_0^\infty f(s)\,dX(s)$ for real functions $f \in L^2[0, \infty)$. If $X(s, w)$ were of bounded variation almost everywhere then we could give a meaning to $\int_0^\infty f(s)\,dX(s, w) = g(w)$. However, since $X(s, w)$ is not of bounded variation almost everywhere, $g(w)$ is not defined in the usual sense.

In order to define $g(w) = \int_0^\infty f(s)\,dX(s, w)$ we proceed as follows.
Let f be a step function of the following type:
$$f = \sum_{i=1}^n a_i\,\chi_{[t_i, t_{i+1})}, \qquad 0 \le t_1 < t_2 < \ldots < t_{n+1}.$$
We naturally define
$$g(w) = \int_0^\infty f(s)\,dX(s, w) = \sum_{i=1}^n a_i\,(X_{t_{i+1}}(w) - X_{t_i}(w)) = \sum_{i=1}^n a_i\,(w(t_{i+1}) - w(t_i)).$$
g satisfies the following properties:
(i) g is a random variable;
(ii) $E(g) = 0$; $E(g^2) = \sum a_i^2\,(t_{i+1} - t_i) = \|f\|_2^2$.

This follows from the facts that (a) $X_{t_{i+1}} - X_{t_i}$ is a normal random variable with mean 0 and variance $(t_{i+1} - t_i)$ and (b) the $X_{t_{i+1}} - X_{t_i}$ are independent increments, i.e. we have
$$E\Big(\int_0^\infty f\,dX\Big) = 0, \qquad E\Big|\int_0^\infty f\,dX\Big|^2 = \|f\|_2^2.$$
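These two identities are easy to check by simulation. The following sketch (not from the text; the particular step function is an arbitrary choice) samples the Gaussian increments directly and compares the empirical mean and variance of g with 0 and $\|f\|_2^2$:

```python
import random

random.seed(1)
# f = 1 on [0, 1) + 2 on [1, 3), so ||f||_2^2 = 1*1 + 4*2 = 9
coeffs = [1.0, 2.0]
times = [0.0, 1.0, 3.0]
n_paths = 100000
total = total_sq = 0.0
for _ in range(n_paths):
    # X_{t_{i+1}} - X_{t_i} are independent N(0, t_{i+1} - t_i)
    g = sum(a * random.gauss(0.0, (times[i + 1] - times[i]) ** 0.5)
            for i, a in enumerate(coeffs))
    total += g
    total_sq += g * g
mean = total / n_paths
var = total_sq / n_paths - mean * mean
print(mean, var)   # mean close to 0, variance close to 9
```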
Exercise 1. If
$$f = \sum_{i=1}^n a_i\,\chi_{[t_i, t_{i+1})}, \quad 0 \le t_1 < \ldots < t_{n+1}, \qquad g = \sum_{i=1}^m b_i\,\chi_{[s_i, s_{i+1})}, \quad 0 \le s_1 < \ldots < s_{m+1},$$
show that
$$\int_0^\infty (f + g)\,dX(s, w) = \int_0^\infty f\,dX(s, w) + \int_0^\infty g\,dX(s, w)$$
and
$$\int_0^\infty (\alpha f)\,dX(s, w) = \alpha \int_0^\infty f\,dX(s, w), \qquad \alpha \in \mathbb{R}.$$

Remark. The mapping $f \mapsto \int_0^\infty f\,dX$ is therefore a linear $L^2$-isometry of the space S of all simple functions of the type
$$\sum_{i=1}^n a_i\,\chi_{[t_i, t_{i+1})}, \qquad 0 \le t_1 < \ldots < t_{n+1},$$
into $L^2(\Omega, \mathcal{B}, P)$.
Exercise 2. Show that S is a dense subspace of $L^2[0, \infty)$.
Hint: $C_c[0, \infty)$, i.e. the set of all continuous functions with compact support, is dense in $L^2[0, \infty)$. Show that the closure of S contains $C_c[0, \infty)$.
Remark. The mapping $f \mapsto \int_0^\infty f\,dX$ can now be uniquely extended to an isometry of $L^2[0, \infty)$ into $L^2(\Omega, \mathcal{B}, P)$.

Next we define integrals of the type
$$g(w) = \int_0^t X(s, w)\,dX(s, w).$$
Put t = 1 (the general case can be dealt with similarly). It seems natural to define
$$(*) \qquad \int_0^1 X(s, w)\,dX(s) = \lim_{\sup|t_j - t_{j-1}| \to 0} \sum_{j=1}^n X(\theta_j)\,(X(t_j) - X(t_{j-1})),$$
where $0 = t_0 < t_1 < \ldots < t_n = 1$ is a partition of [0, 1] with $t_{j-1} \le \theta_j \le t_j$. In general the limit on the right hand side may not exist. Even if it exists, it may happen that, depending on the choice of $\theta_j$, we obtain different limits. To consider an example we choose $\theta_j = t_j$ and then $\theta_j = t_{j-1}$ and compute the right hand side of (*). If $\theta_j = t_{j-1}$,
$$\sum_{j=1}^n X_{\theta_j}(X_{t_j} - X_{t_{j-1}}) = \sum_{j=1}^n X_{t_{j-1}}(X_{t_j} - X_{t_{j-1}}) = \frac{1}{2}\sum_{j=1}^n (X_{t_j}^2 - X_{t_{j-1}}^2) - \frac{1}{2}\sum_{j=1}^n (X_{t_j} - X_{t_{j-1}})^2$$
$$\to \frac{1}{2}[X^2(1) - X^2(0)] - \frac{1}{2} \quad \text{as } n \to \infty \text{ and } \sup|t_j - t_{j-1}| \to 0,$$
arguing as in the proof of the result that Brownian motion is not of bounded variation. If $\theta_j = t_j$,
$$\lim_{\sup|t_j - t_{j-1}| \to 0} \sum_{j=1}^n X_{t_j}(X_{t_j} - X_{t_{j-1}}) = \frac{1}{2}X^2(1) - \frac{1}{2}X^2(0) + \frac{1}{2}.$$
Thus we get different answers depending on the choice of $\theta_j$, and hence one has to be very careful in defining the integral. It turns out that the choice $\theta_j = t_{j-1}$ is more appropriate in the definition of the integral and gives better results.

Remark. The limit in (*) should be understood in the sense of convergence in probability.
Exercise 3. Let $0 \le a < b$. Show that the left integral ($\theta_j = t_{j-1}$) is given by
$$L\!\int_a^b X(s)\,dX(s) = \frac{X^2(b) - X^2(a) - (b - a)}{2}$$
and the right integral ($\theta_j = t_j$) is given by
$$R\!\int_a^b X(s)\,dX(s) = \frac{X^2(b) - X^2(a) + (b - a)}{2}.$$
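The two answers differ by exactly $(b - a)$, the quadratic variation of the Brownian path on [a, b]. A discretized check for a = 0, b = 1 (this is a sketch, not from the text; the grid size is an arbitrary choice):

```python
import random

random.seed(2)
n = 200000
dt = 1.0 / n
X = [0.0]
for _ in range(n):
    X.append(X[-1] + random.gauss(0.0, dt ** 0.5))

left = sum(X[j - 1] * (X[j] - X[j - 1]) for j in range(1, n + 1))
right = sum(X[j] * (X[j] - X[j - 1]) for j in range(1, n + 1))

# Exercise 3 with a = 0, b = 1 and X(0) = 0:
#   L-integral = (X(1)^2 - 1)/2,  R-integral = (X(1)^2 + 1)/2
print(left, (X[-1] ** 2 - 1) / 2)
print(right, (X[-1] ** 2 + 1) / 2)
print(right - left)   # the quadratic variation, close to b - a = 1
```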
We now take up the general theory of stochastic integration. To motivate the definitions which follow, let us consider a d-dimensional Brownian motion $\{\beta(t) : t \ge 0\}$. We have
$$P[\beta(t + s) - \beta(t) \in A \mid \mathcal{F}_t] = \int_A \frac{1}{(2\pi s)^{d/2}}\,e^{-|y|^2/2s}\,dy.$$
Thus
$$E[f(\beta(t + s) - \beta(t)) \mid \mathcal{F}_t] = \int f(y)\,\frac{1}{(2\pi s)^{d/2}}\,e^{-|y|^2/2s}\,dy.$$
In particular, if $f(x) = e^{ix \cdot u}$,
$$E[e^{iu \cdot (\beta(t+s) - \beta(t))} \mid \mathcal{F}_t] = \int e^{iu \cdot y}\,\frac{1}{(2\pi s)^{d/2}}\,e^{-|y|^2/2s}\,dy = e^{-s|u|^2/2}.$$
Thus
$$E[e^{iu \cdot \beta(t+s)} \mid \mathcal{F}_t] = e^{iu \cdot \beta(t)}\,e^{-s|u|^2/2},$$
or
$$E[e^{iu \cdot \beta(t+s) + (t+s)|u|^2/2} \mid \mathcal{F}_t] = e^{iu \cdot \beta(t) + t|u|^2/2}.$$
Replacing iu by θ we get
$$E[e^{\theta \cdot \beta(s) - s|\theta|^2/2} \mid \mathcal{F}_t] = e^{\theta \cdot \beta(t) - t|\theta|^2/2}, \qquad s > t,\ \theta \in \mathbb{R}^d.$$
It is clear that $e^{\theta \cdot \beta(t) - t|\theta|^2/2}$ is $\mathcal{F}_t$-measurable, and a simple calculation gives
$$E\big(e^{\theta \cdot \beta(t) - |\theta|^2 t/2}\big) < \infty.$$
We thus have
Theorem. If $\{\beta(t) : t \ge 0\}$ is a d-dimensional Brownian motion then $\exp[\theta \cdot \beta(t) - |\theta|^2 t/2]$ is a martingale relative to $\mathcal{F}_t$, the σ-algebra generated by $\{\beta(s) : s \le t\}$.
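In particular, taking expectations, $E(\exp[\theta \cdot \beta(t) - |\theta|^2 t/2]) = 1$ for every t. A one-dimensional Monte Carlo check (a sketch; θ, t and the sample size are arbitrary choices):

```python
import math
import random

random.seed(3)
theta, t = 0.5, 1.0
n = 200000
acc = 0.0
for _ in range(n):
    b = random.gauss(0.0, math.sqrt(t))      # beta(t) ~ N(0, t)
    acc += math.exp(theta * b - theta * theta * t / 2.0)
mean = acc / n
print(mean)   # close to 1
```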
Definition. Let $(\Omega, \mathcal{B}, P)$ be a probability space and $(\mathcal{F}_t)_{t \ge 0}$ an increasing family of sub-σ-algebras of $\mathcal{F}$ with $\mathcal{F} = \sigma(\cup_{t \ge 0}\mathcal{F}_t)$. Let

(i) $a : [0, \infty) \times \Omega \to [0, \infty)$ be bounded and progressively measurable;

(ii) $b : [0, \infty) \times \Omega \to \mathbb{R}$ be bounded and progressively measurable;

(iii) $X : [0, \infty) \times \Omega \to \mathbb{R}$ be progressively measurable, right continuous on $[0, \infty)$ for every $w \in \Omega$, and continuous on $[0, \infty)$ almost everywhere on Ω;

(iv) $$Z_t^\theta(w) = \exp\Big[\theta X(t, w) - \theta\int_0^t b(s, w)\,ds - \frac{\theta^2}{2}\int_0^t a(s, w)\,ds\Big]$$
be a martingale relative to $(\mathcal{F}_t)_{t \ge 0}$ for every real θ.

Then X(t, w) is called an Ito process corresponding to the parameters b and a, and we write $X_t \in I[b, a]$.
N.B. The progressive measurability of X implies that $X_t$ is $\mathcal{F}_t$-measurable.
Example. If $\{\beta(t) : t \ge 0\}$ is a Brownian motion, then $X(t, w) = \beta_t(w)$ is an Ito process corresponding to the parameters 0 and 1. (i) and (ii) are obvious. (iii) follows from the right continuity of $\beta_t$ and the measurability of $\beta_t$ relative to $\mathcal{F}_t$, and (iv) was proved in the previous theorem.
Exercise 4. Show that $Z_t^\theta(w)$ defined in (iv) is $\mathcal{F}_t$-measurable and progressively measurable.
[Hint: (i) $Z_t^\theta$ is right continuous. (ii) Use Fubini's theorem to prove measurability.]
Remark. If we put $Y(t, w) = X(t, w) - \int_0^t b(s, w)\,ds$, then Y(t, w) is progressively measurable and Y(t, w) is an Ito process corresponding to the parameters 0, a. Thus we need only consider integrals of the type $\int_0^t f(s, w)\,dY(s, w)$, and we define
$$\int_0^t f(s, w)\,dX(s, w) = \int_0^t f(s, w)\,dY(s, w) + \int_0^t f(s, w)\,b(s, w)\,ds.$$
(Note that formally we have $dY = dX - b\,dt$.)

Lemma 1. If $Y(t, w) \in I[0, a]$, then
$$Y(t, w) \quad \text{and} \quad Y^2(t, w) - \int_0^t a(s, w)\,ds$$
are martingales relative to $(\mathcal{F}_t)$.
Proof. To motivate the arguments which follow, we first give a formal proof. Let
$$Y^\theta(t) = e^{\theta Y(t, w) - \frac{\theta^2}{2}\int_0^t a(s, w)\,ds}.$$
Then $Y^\theta(t)$ is a martingale for every θ. Therefore $\frac{Y^\theta - 1}{\theta}$ is a martingale for every $\theta \ne 0$. Hence (formally),
$$\lim_{\theta \to 0} \frac{Y^\theta - 1}{\theta} = \frac{\partial Y^\theta}{\partial\theta}\Big|_{\theta = 0}$$
is a martingale.
Step 1. $Y(t, \cdot) \in L^k(\Omega, \mathcal{F}, P)$ for $k = 0, 1, 2, \ldots$ and every t. In fact, for every real θ, $Y^\theta(t)$ is a martingale and hence $E(Y^\theta(t)) < \infty$. Since a is bounded, this means that
$$E\big(e^{\theta Y(t, \cdot)}\big) < \infty \quad \forall\theta.$$
Step 2. Let $X^\theta(t) = \big[Y(t, \cdot) - \theta\int_0^t a\,ds\big]\,Y^\theta(t) = \frac{d}{d\theta}Y^\theta(t, \cdot)$.

Define
$$A(\theta) = \int_A \big(X^\theta(t, \cdot) - X^\theta(s, \cdot)\big)\,dP(w)$$
where $t > s$, $A \in \mathcal{F}_s$. Then
$$\int_{\theta_1}^{\theta_2} A(\theta)\,d\theta = \int_{\theta_1}^{\theta_2}\int_A \big[X^\theta(t, \cdot) - X^\theta(s, \cdot)\big]\,dP(w)\,d\theta.$$
Since a is bounded, $\sup_{|\theta| \le c} E([Y^\theta(t, \cdot)]^k) < \infty$ for every $c > 0$, and $E(|Y|^k) < \infty$ for every k; we can use Fubini's theorem to get
$$\int_{\theta_1}^{\theta_2} A(\theta)\,d\theta = \int_A \int_{\theta_1}^{\theta_2} \big[X^\theta(t, \cdot) - X^\theta(s, \cdot)\big]\,d\theta\,dP(w),$$
or
$$\int_{\theta_1}^{\theta_2} A(\theta)\,d\theta = \int_A \big[Y^{\theta_2}(t, \cdot) - Y^{\theta_1}(t, \cdot)\big]\,dP(w) - \int_A \big[Y^{\theta_2}(s, \cdot) - Y^{\theta_1}(s, \cdot)\big]\,dP(w).$$
Let $A \in \mathcal{F}_s$ and $t > s$; then, since $Y^\theta$ is a martingale,
$$\int_{\theta_1}^{\theta_2} A(\theta)\,d\theta = 0.$$
This is true for all $\theta_1 < \theta_2$, and since $A(\theta)$ is a continuous function of θ, we conclude that
$$A(\theta) = 0 \quad \forall\theta.$$
In particular, $A(0) = 0$, which means that
$$\int_A Y(t, \cdot)\,dP(w) = \int_A Y(s, \cdot)\,dP(w) \qquad \forall A \in \mathcal{F}_s,\ t > s,$$
i.e., Y(t) is a martingale relative to $(\Omega, \mathcal{F}_t, P)$.
To prove the second part we put
$$Z^\theta(t, \cdot) = \frac{d^2}{d\theta^2}Y^\theta(t)$$
and
$$A(\theta) = \int_A \{Z^\theta(t, \cdot) - Z^\theta(s, \cdot)\}\,dP(w).$$
Then, by Fubini,
$$\int_{\theta_1}^{\theta_2} A(\theta)\,d\theta = \int_A \int_{\theta_1}^{\theta_2} \big[Z^\theta(t, \cdot) - Z^\theta(s, \cdot)\big]\,d\theta\,dP(w),$$
or,
$$\int_{\theta_1}^{\theta_2} A(\theta)\,d\theta = \tilde A(\theta_2) - \tilde A(\theta_1) = 0 \quad \text{if } A \in \mathcal{F}_s,\ t > s,$$
where $\tilde A$ is the function $A(\theta)$ of Step 2. Therefore
$$A(\theta) = 0 \quad \forall\theta.$$
In particular, $A(0) = 0$ implies that
$$Y^2(t, w) - \int_0^t a(s, w)\,ds$$
is an $(\Omega, \mathcal{F}_t, P)$ martingale. This completes the proof of Lemma 1.
Definition. A function $\theta : [0, \infty) \times \Omega \to \mathbb{R}$ is called simple if there exist reals $s_0, s_1, \ldots, s_n, \ldots$,
$$0 \le s_0 < s_1 < \ldots < s_n < \ldots < \infty,$$
with $s_n$ increasing to $+\infty$, such that
$$\theta(s, w) = \theta_j(w) \quad \text{if } s \in [s_j, s_{j+1}),$$
where $\theta_j(w)$ is $\mathcal{F}_{s_j}$-measurable and bounded.

Definition. Let $\theta : [0, \infty) \times \Omega \to \mathbb{R}$ be a simple function and $Y(t, w) \in I[0, a]$. We define the stochastic integral of θ with respect to Y, denoted
$$\int_0^t \theta(s, w)\,dY(s, w),$$
by
$$\xi(t, w) = \int_0^t \theta(s, w)\,dY(s, w)$$
$$= \sum_{j=1}^k \theta_{j-1}(w)\big[Y(s_j, w) - Y(s_{j-1}, w)\big] + \theta_k(w)\big[Y(t, w) - Y(s_k, w)\big], \qquad s_k \le t < s_{k+1}.$$
Lemma 2. Let $\theta : [0, \infty) \times \Omega \to \mathbb{R}$ be a simple function and $Y(t, w) \in I[0, a]$. Then
$$\xi(t, w) = \int_0^t \theta(s, w)\,dY(s, w) \in I[0, a\theta^2].$$
Proof. (i) By definition, θ is right continuous and θ(t, w) is $\mathcal{F}_t$-measurable; hence θ is progressively measurable. Since a is progressively measurable and bounded,
$$a\theta^2 : [0, \infty) \times \Omega \to [0, \infty)$$
is progressively measurable and bounded.

(ii) From the definition of ξ it is clear that $\xi(t, \cdot)$ is right continuous, continuous almost everywhere and $\mathcal{F}_t$-measurable; therefore ξ is progressively measurable.

(iii) $$Z_t(w) = e^{\lambda\xi(t, w) - \frac{\lambda^2}{2}\int_0^t a\theta^2\,ds}$$
is clearly $\mathcal{F}_t$-measurable. We show that
$$E(Z_t) < \infty \ \ \forall t \quad \text{and} \quad E(Z_{t_2} \mid \mathcal{F}_{t_1}) = Z_{t_1} \ \text{if } t_1 < t_2.$$
We can assume without loss of generality that λ = 1 (if $\lambda \ne 1$ we replace θ by λθ). Therefore
$$Z_t(w) = e^{\xi(t, w) - \frac{1}{2}\int_0^t a\theta^2\,ds}.$$
Therefore, assuming first that $t_1, t_2$ lie in the same partition interval $[s_k, s_{k+1})$,
$$E(Z_{t_2}(w) \mid \mathcal{F}_{t_1}) = Z_{t_1}(w)\,E\Big(\exp\Big[\theta_k(w)\big(Y(t_2, w) - Y(t_1, w)\big) - \frac{\theta_k^2(w)}{2}\int_{t_1}^{t_2} a\,ds\Big] \,\Big|\, \mathcal{F}_{t_1}\Big).$$
As $Y \in I[0, a]$,
$$(*) \qquad E\Big(\exp\Big[\lambda\big(Y(t_2, w) - Y(t_1, w)\big) - \frac{\lambda^2}{2}\int_{t_1}^{t_2} a(s, w)\,ds\Big] \,\Big|\, \mathcal{F}_{t_1}\Big) = 1,$$
and since $\theta_k(w)$ is $\mathcal{F}_{t_1}$-measurable, (*) remains valid if λ is replaced by $\theta_k$. Thus
$$E(Z_{t_2} \mid \mathcal{F}_{t_1}) = Z_{t_1}(w).$$
The general case follows if we use the identity
$$E(E(X \mid \mathcal{C}_1) \mid \mathcal{C}_2) = E(X \mid \mathcal{C}_2) \quad \text{for } \mathcal{C}_2 \subset \mathcal{C}_1.$$
Thus $Z_t$ is a martingale and $\xi(t, w) \in I[0, a\theta^2]$.

Corollary. (i) $\xi(t, w)$ is a martingale; $E(\xi(t, w)) = 0$;
(ii) $\xi^2(t, w) - \int_0^t a\theta^2\,ds$ is a martingale, with
$$E(\xi^2(t, w)) = E\Big(\int_0^t a\theta^2(s, w)\,ds\Big).$$
Proof. Follows from Lemma 1.
Lemma 3. Let θ(s, w) be progressively measurable such that for each t,
$$E\Big(\int_0^t \theta^2(s, w)\,ds\Big) < \infty.$$
Then there exists a sequence $\theta_n(s, w)$ of simple functions such that
$$\lim_n E\Big(\int_0^t |\theta_n(s, w) - \theta(s, w)|^2\,ds\Big) = 0.$$
Proof. We may assume that θ is bounded; for if $\theta_N = \theta$ for $|\theta| \le N$ and 0 if $|\theta| > N$, then $\theta_N \to \theta$ for every $(s, w) \in [0, t] \times \Omega$. $\theta_N$ is progressively measurable and $|\theta_N - \theta|^2 \le 4|\theta|^2$. By hypothesis $\theta \in L^2([0, t] \times \Omega)$. Therefore $E\big(\int_0^t |\theta_N - \theta|^2\,ds\big) \to 0$, by dominated convergence. Further, we can also assume that θ is continuous. For, if θ is bounded, define
$$\theta_h(t, w) = \frac{1}{h}\int_{(t-h)\vee 0}^{t} \theta(s, w)\,ds.$$
$\theta_h$ is continuous in t and $\mathcal{F}_t$-measurable and hence progressively measurable. Also, by Lebesgue's theorem,
$$\theta_h(t, w) \to \theta(t, w) \text{ as } h \to 0, \quad \forall t, w.$$
Since θ is bounded by C, $\theta_h$ is also bounded by C. Thus
$$E\Big(\int_0^t |\theta_h(s, w) - \theta(s, w)|^2\,ds\Big) \to 0$$
(by dominated convergence). If θ is continuous, bounded and progressively measurable, then
$$\theta_n(s, w) = \theta\Big(\frac{[ns]}{n}, w\Big)$$
is progressively measurable, bounded and simple. But
$$\lim_n \theta_n(s, w) = \theta(s, w).$$
Thus by dominated convergence
$$E\Big(\int_0^t |\theta_n - \theta|^2\,ds\Big) \to 0 \text{ as } n \to \infty.$$
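For a deterministic continuous θ the construction in the proof is completely explicit; e.g. for θ(s) = s on [0, 1] one has $\int_0^1 |\theta_n - \theta|^2\,ds = 1/(3n^2) \to 0$. The sketch below (not from the text; the choice of θ and the quadrature grid are arbitrary) checks this numerically:

```python
import math

def l2_error_sq(n, m=100000):
    """Midpoint-rule value of the integral of |θ_n(s) - θ(s)|^2 over [0, 1]
    for θ(s) = s and θ_n(s) = θ([ns]/n)."""
    h = 1.0 / m
    total = 0.0
    for k in range(m):
        s = (k + 0.5) * h
        total += (s - math.floor(n * s) / n) ** 2 * h
    return total

for n in (10, 100):
    print(n, l2_error_sq(n), 1.0 / (3 * n * n))   # the two values agree
```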
Theorem. Let θ(s, w) be progressively measurable such that
$$E\Big(\int_0^t \theta^2(s, w)\,ds\Big) < \infty$$
for each t > 0. Let $(\theta_n)$ be simple approximations to θ as in Lemma 3. Put
$$\xi_n(t, w) = \int_0^t \theta_n(s, w)\,dY(s, w),$$
where $Y \in I[0, a]$. Then

(i) $\lim_n \xi_n(t, w)$ exists uniformly in probability, i.e. there exists an almost surely continuous $\xi(t, w)$ such that
$$\lim_n P\Big(\sup_{0 \le t \le T} |\xi_n(t, w) - \xi(t, w)| \ge \varepsilon\Big) = 0$$
for each $\varepsilon > 0$ and for each T. Moreover, ξ is independent of the sequence $(\theta_n)$.

(ii) The map $\theta \mapsto \xi$ is linear.

(iii) $\xi(t, w)$ and $\xi^2(t, w) - \int_0^t a\theta^2\,ds$ are martingales.

(iv) If θ is bounded, $\xi \in I[0, a\theta^2]$.
Proof. (i) It is easily seen that for simple functions the stochastic integral is linear. Therefore
$$(\xi_n - \xi_m)(t, w) = \int_0^t (\theta_n - \theta_m)(s, w)\,dY(s, w).$$
Since $\xi_n - \xi_m$ is an almost surely continuous martingale,
$$P\Big(\sup_{0 \le t \le T} |\xi_n(t, w) - \xi_m(t, w)| \ge \varepsilon\Big) \le \frac{1}{\varepsilon^2}\,E\big[(\xi_n - \xi_m)^2(T, w)\big].$$
This is a consequence of Kolmogorov's inequality (see Appendix). Since
$$(\xi_n - \xi_m)^2 - \int_0^t a(\theta_n - \theta_m)^2\,ds$$
is a martingale, and a is bounded,
$$(*) \qquad E\big[(\xi_n - \xi_m)^2(T, w)\big] = E\Big(\int_0^T (\theta_n - \theta_m)^2\,a\,ds\Big) \le \text{const}\;E\Big(\int_0^T (\theta_n - \theta_m)^2\,ds\Big).$$
Therefore
$$\lim_{n, m \to \infty} E\big[(\xi_n - \xi_m)^2(T, w)\big] = 0.$$
Thus $(\xi_n)$ is uniformly Cauchy in probability. Therefore there exists a progressively measurable ξ such that
$$\lim_n P\Big(\sup_{0 \le t \le T} |\xi_n(t, w) - \xi(t, w)| \ge \varepsilon\Big) = 0 \quad \forall\varepsilon > 0,\ \forall T.$$
It can be shown that ξ is almost surely continuous.
If $(\theta_n)$ and $(\theta_n')$ are two sequences of simple functions approximating θ, then (*) shows that
$$E\big[(\xi_n - \xi_n')^2(T, w)\big] \to 0.$$
Thus
$$\lim_n \xi_n = \lim_n \xi_n',$$
i.e. ξ is independent of $(\theta_n)$.

(ii) is obvious.

(iii) (*) shows that $\xi_n \to \xi$ in $L^2$ and therefore $\xi_n(t, \cdot) \to \xi(t, \cdot)$ in $L^1$ for each fixed t. Since $\xi_n(t, w)$ is a martingale for each n, $\xi(t, w)$ is a martingale.

(iv) $\xi_n^2(t, w) - \int_0^t a\theta_n^2\,ds$ is a martingale for each n. Since $\xi_n(t, w) \to \xi(t, w)$ in $L^2$ for each fixed t,
$$\xi_n^2(t, w) \to \xi^2(t, w) \text{ in } L^1 \text{ for each fixed } t;$$
for $\xi_n^2(t, w) - \xi^2(t, w) = (\xi_n - \xi)(\xi_n + \xi)$, and using Hölder's inequality we get the result. Similarly, since $\theta_n \to \theta$ in $L^2([0, t] \times \Omega)$, $\theta_n^2 \to \theta^2$ in $L^1([0, t] \times \Omega)$, and because a is bounded, $a\theta_n^2 \to a\theta^2$ in $L^1([0, t] \times \Omega)$. This shows that $\xi_n^2(t, w) - \int_0^t a\theta_n^2\,ds$ converges to
$$\xi^2(t, w) - \int_0^t a\theta^2\,ds$$
for each t in $L^1$. Therefore
$$\xi^2(t, w) - \int_0^t a\theta^2\,ds$$
is a martingale.
(v) Let θ be bounded. To show that $\xi \in I[0, a\theta^2]$ it is enough to show that
$$e^{\lambda\xi(t, w) - \frac{\lambda^2}{2}\int_0^t a\theta^2\,ds}$$
is a martingale for each λ, the other conditions being trivially satisfied. Let
$$Z_n(t, w) = e^{\lambda\xi_n(t, w) - \frac{\lambda^2}{2}\int_0^t a\theta_n^2\,ds}.$$
We can assume that $|\theta_n| \le C$ if $|\theta| \le C$ (see the proof of Lemma 3). Now
$$Z_n^2 = \exp\Big[2\lambda\xi_n(t, w) - \frac{(2\lambda)^2}{2}\int_0^t a\theta_n^2\,ds + \lambda^2\int_0^t a\theta_n^2\,ds\Big].$$
Thus
$$(**) \qquad E(Z_n^2) \le \text{const}\;E\Big(e^{2\lambda\xi_n(t, w) - \frac{(2\lambda)^2}{2}\int_0^t a\theta_n^2\,ds}\Big) = \text{const},$$
since $Z_n$ is a martingale for each λ. A subsequence $Z_{n_i}$ converges to
$$e^{\lambda\xi(t, w) - \frac{\lambda^2}{2}\int_0^t a\theta^2\,ds}$$
almost everywhere (P). This together with (**) ensures uniform integrability of $(Z_n)$, and therefore
$$e^{\lambda\xi(t, w) - \frac{\lambda^2}{2}\int_0^t a\theta^2\,ds}$$
is a martingale. Thus ξ is an Ito process, $\xi \in I[0, a\theta^2]$.

Definition. With the hypothesis as in the above theorem we define the stochastic integral
$$\xi(t, w) = \int_0^t \theta(s, w)\,dY(s, w).$$
Exercise. Show that d(X + Y) = dX + dY.
Remark. If θ is bounded, then ξ satisfies the hypothesis of the previous theorem; further, since ξ is itself an Ito process, we can also define stochastic integrals with respect to ξ.
Examples. 1. Let $\{\beta(t) : t \ge 0\}$ be a Brownian motion; then β(t, w) is progressively measurable (being continuous and $\mathcal{F}_t$-measurable). Also,
$$\int_\Omega \int_0^t \beta^2(s)\,ds\,dP = \int_0^t \int_\Omega \beta^2(s)\,dP\,ds = \int_0^t s\,ds = \frac{t^2}{2}.$$
Hence
$$\int_0^t \beta(s, w)\,d\beta(s, w)$$
is well defined.
2. Similarly, $\int_0^t \beta(s/2)\,d\beta(s)$ is well defined.

3. However,
$$\int_0^t \beta(2s)\,d\beta(s)$$
is not well defined, the reason being that β(2s) is not progressively measurable.
Exercise 5. Show that β(2s) is not progressively measurable.
(Hint: Try to show that β(2s) is not $\mathcal{F}_s$-measurable for every s. To show this prove that $\mathcal{F}_s \subsetneq \mathcal{F}_{2s}$.)
Exercise 6. Show that for a Brownian motion β(t), the stochastic integral
$$\int_0^1 \beta(s, \cdot)\,d\beta(s, \cdot)$$
is the same as the left integral
$$L\!\int_0^1 \beta(s, \cdot)\,d\beta(s, \cdot)$$
defined earlier.
12. Change of Variable Formula

WE SHALL PROVE the
Theorem. Let θ be any bounded progressively measurable function and $Y \in I[0, a]$ an Ito process. If σ is any progressively measurable function such that
$$E\int_0^t \sigma^2\,ds < \infty \quad \forall t,$$
then
$$(*) \qquad \int_0^t \sigma\,d\xi(s, w) = \int_0^t \sigma(s, w)\,\theta(s, w)\,dY(s, w),$$
where
$$\xi(t, w) = \int_0^t \theta(s, w)\,dY(s, w).$$
Proof.
Step 1. Let σ and θ both be simple, with θ bounded. By a refinement of the partition, if necessary, we may assume that there exist reals $0 = s_0 < s_1 < \ldots < s_n < \ldots$ increasing to $+\infty$ such that σ and θ are constant on each $[s_j, s_{j+1})$, say $\sigma = \sigma_j(w)$, $\theta = \theta_j(w)$, where $\sigma_j(w)$ and $\theta_j(w)$ are $\mathcal{F}_{s_j}$-measurable. In this case (*) is a direct consequence of the definition.
Step 2. Let σ be simple and bounded. Let $(\theta_n)$ be a sequence of simple bounded functions approximating θ as in Lemma 3. Put
$$\xi_n(t, w) = \int_0^t \theta_n(s, w)\,dY(s, w).$$
By Step 1,
$$(**) \qquad \int_0^t \sigma\,d\xi_n = \int_0^t \sigma\theta_n\,dY(s, w).$$
Since σ is bounded, $\sigma\theta_n$ converges to $\sigma\theta$ in $L^2([0, t] \times \Omega)$. Hence, by definition, $\int_0^t \sigma\theta_n\,dY(s, w)$ converges to $\int_0^t \sigma\theta\,dY$ in probability. Further,
$$\int_0^t \sigma\,d\xi_n(s, w) = \sigma(s_0, w)\big[\xi_n(s_1, w) - \xi_n(s_0, w)\big] + \ldots + \sigma(s_k, w)\big[\xi_n(t, w) - \xi_n(s_k, w)\big],$$
where $s_0 < s_1 < \ldots$ is a partition for σ, and $\xi_n(t, w)$ converges to $\xi(t, w)$ in probability for every t. Therefore
$$\int_0^t \sigma\,d\xi_n(s, w)$$
converges in probability to
$$\int_0^t \sigma\,d\xi(s, w).$$
Taking limits as $n \to \infty$ in (**) we get
$$\int_0^t \sigma\,d\xi(s, w) = \int_0^t \sigma\theta\,dY(s, w).$$
Step 3. Let σ be any progressively measurable function with
$$E\Big(\int_0^t \sigma^2\,ds\Big) < \infty \quad \forall t.$$
Let $\sigma_n$ be simple approximations to σ as in Lemma 3. Then, by Step 2,
$$(*\!*\!*) \qquad \int_0^t \sigma_n(s, w)\,d\xi(s, w) = \int_0^t \sigma_n(s, w)\,\theta(s, w)\,dY(s, w).$$
By definition, the left side above converges to
$$\int_0^t \sigma(s, w)\,d\xi(s, w)$$
in probability. As θ is bounded, $\sigma_n\theta$ converges to $\sigma\theta$ in $L^2([0, t] \times \Omega)$. Therefore
$$P\Big(\sup_{0 \le t \le T}\Big|\int_0^t \sigma_n\theta\,dY(s, w) - \int_0^t \sigma\theta\,dY(s, w)\Big| \ge \varepsilon\Big) \le \frac{\|a\|_\infty}{\varepsilon^2}\,E\Big(\int_0^T (\sigma_n\theta - \sigma\theta)^2\,ds\Big)$$
(see the proof of the main theorem leading to the definition of the stochastic integral). Thus
$$\int_0^t \sigma_n\theta\,dY(s, w)$$
converges to $\int_0^t \sigma\theta\,dY(s, w)$ in probability. Let n tend to $+\infty$ in (***) to conclude the proof.
13. Extension to Vector-Valued Ito Processes
Definition. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(\mathcal{F}_t)$ an increasing family of sub-σ-algebras of $\mathcal{F}$. Suppose further that

(i) $a : [0, \infty) \times \Omega \to S_d^+$ is a progressively measurable, bounded function taking values in $S_d^+$, the class of all symmetric positive semi-definite $d \times d$ matrices with real entries;

(ii) $b : [0, \infty) \times \Omega \to \mathbb{R}^d$ is a progressively measurable, bounded function;

(iii) $X : [0, \infty) \times \Omega \to \mathbb{R}^d$ is progressively measurable, right continuous for every w and continuous almost everywhere (P);

(iv) $$Z^\theta(t, \cdot) = \exp\Big[\langle\theta, X(t, \cdot)\rangle - \int_0^t \langle\theta, b(s, \cdot)\rangle\,ds - \frac{1}{2}\int_0^t \langle\theta, a(s, \cdot)\theta\rangle\,ds\Big]$$
is a martingale for each $\theta \in \mathbb{R}^d$, where
$$\langle x, y\rangle = x_1y_1 + \ldots + x_dy_d, \qquad x, y \in \mathbb{R}^d.$$
Then X is called an Ito process corresponding to the parameters b and a, and we write $X \in I[b, a]$.
Note. 1. $Z^\theta(t, w)$ is a real valued function.
2. b is progressively measurable if and only if each $b_i$ is progressively measurable.
3. a is progressively measurable if and only if each $a_{ij}$ is so.
Exercise 1. If $X \in I[b, a]$, then show that

(i) $X_i \in I[b_i, a_{ii}]$;

(ii) $Y = \sum_{i=1}^d \theta_i X_i \in I[\langle\theta, b\rangle, \langle\theta, a\theta\rangle]$, where $\theta = (\theta_1, \ldots, \theta_d)$.

(Hint: (ii) implies (i). To prove (ii) appeal to the definition.)
Remark. If X has a multivariate normal distribution with mean μ and covariance $(\rho_{ij})$, then $Y = \langle\theta, X\rangle$ also has a normal distribution, with mean $\langle\theta, \mu\rangle$ and variance $\langle\theta, \rho\theta\rangle$. Note the analogy with the above exercise. This analogy explains why at times b is referred to as the mean and a as the covariance.
Exercise 2. If $\{\beta(t) : t \ge 0\}$ is a d-dimensional Brownian motion, then $\beta(t, w) \in I[0, I]$, where I = the $d \times d$ identity matrix.

As before, one can show that $Y(t, \cdot) = X(t, \cdot) - \int_0^t b(s, w)\,ds$ is an Ito process with parameters 0 and a.
Definition. Let X be a d-dimensional Ito process and $\theta = (\theta_1, \ldots, \theta_d)$ a d-dimensional progressively measurable function such that
$$E\int_0^t \langle\theta(s, \cdot), \theta(s, \cdot)\rangle\,ds$$
is finite or, equivalently,
$$E\int_0^t \theta_i^2(s, \cdot)\,ds < \infty \qquad (i = 1, 2, \ldots, d).$$
Then by definition
$$\int_0^t \langle\theta(s, \cdot), dX(s, \cdot)\rangle = \sum_{i=1}^d \int_0^t \theta_i(s, \cdot)\,dX_i(s, \cdot).$$
Proposition. Let X be a d-dimensional Ito process, $X \in I[b, a]$, and let θ be progressively measurable and bounded. If
$$\xi_i(t, \cdot) = \int_0^t \theta_i\,dX_i(s, \cdot),$$
then $\xi = (\xi_1, \ldots, \xi_d) \in I[B, A]$, where
$$B = (\theta_1 b_1, \ldots, \theta_d b_d) \quad \text{and} \quad A_{ij} = \theta_i\theta_j a_{ij}.$$
Proof. (i) Clearly $A_{ij}$ is progressively measurable and bounded. Since $a \in S_d^+$, $A \in S_d^+$.

(ii) Again, B is progressively measurable and bounded.

(iii) Since θ is bounded, each $\xi_i(t, \cdot)$ is an Ito process; hence ξ is progressively measurable, right continuous, continuous almost everywhere (P). It only remains to verify the martingale condition.

Step 1. Let $\lambda = (\lambda_1, \ldots, \lambda_d) \in \mathbb{R}^d$. By hypothesis,
$$(*) \qquad E\Big(\exp\Big[\sum_i \lambda_i\big(X_i(t) - X_i(s)\big) - \int_s^t \sum_i \lambda_i b_i\,du - \frac{1}{2}\int_s^t \sum_{i,j} \lambda_i\lambda_j a_{ij}\,du\Big] \,\Big|\, \mathcal{F}_s\Big) = 1.$$
Assume that each $\theta_i$ is constant on [s, t], $\theta_i = \theta_i(w)$ and $\mathcal{F}_s$-measurable. Then (*) remains true if the $\lambda_i$ are replaced by $\lambda_i\theta_i(w)$, and since the $\theta_i$ are constant over [s, t] we get
$$(**) \qquad E\Big(\exp\Big[\sum_{i=1}^d \int_0^t \lambda_i\theta_i(u, \cdot)\,dX_i(u, \cdot) - \int_0^t \sum_i \lambda_i b_i\theta_i(u, \cdot)\,du - \frac{1}{2}\int_0^t \sum_{i,j} \lambda_i\lambda_j\theta_i(u, \cdot)\theta_j(u, \cdot)\,a_{ij}\,du\Big] \,\Big|\, \mathcal{F}_s\Big)$$
$$= \exp\Big[\sum_{i=1}^d \int_0^s \lambda_i\theta_i(u, \cdot)\,dX_i(u, \cdot) - \int_0^s \langle\lambda, B\rangle\,du - \frac{1}{2}\int_0^s \langle\lambda, A\lambda\rangle\,du\Big].$$
Step 2. Let each $\theta_i$ be a simple function. By considering a finer partition we may assume that each $\theta_i$ is a step function, i.e. there exist points $s = s_0 < s_1 < \ldots < s_{n+1} = t$ such that on each $[s_j, s_{j+1})$ every $\theta_i$ is constant and $\mathcal{F}_{s_j}$-measurable. Then (**) holds if we use the fact that if $\mathcal{C}_1 \supset \mathcal{C}_2$,
$$E(E(f \mid \mathcal{C}_1) \mid \mathcal{C}_2) = E(f \mid \mathcal{C}_2).$$
Step 3. Let θ be bounded, $|\theta| \le C$. Let $(\theta^{(n)})$ be a sequence of simple functions approximating θ as in Lemma 3. (**) is true if $\theta_i$ is replaced by $\theta_i^{(n)}$ for each n. A simple verification shows that the expression $Z_n(t, \cdot)$ in the parenthesis on the left side of (**), with $\theta_i$ replaced by $\theta_i^{(n)}$, converges in probability as $n \to \infty$ to
$$Z(t, \cdot) = \exp\Big[\int_0^t \sum_i \lambda_i\theta_i(s, \cdot)\,dX_i(s, \cdot) - \int_0^t \sum_i \lambda_i b_i\theta_i(s, \cdot)\,ds - \frac{1}{2}\int_0^t \sum_{i,j} \lambda_i\lambda_j\theta_i\theta_j a_{ij}\,ds\Big].$$
Since $Z_n(t, \cdot)$ is a martingale and the functions $\theta_i$, $\theta_j$, a are all bounded,
$$\sup_n E(Z_n^2(t, \cdot)) < \infty.$$
This proves that $Z(t, \cdot)$ is a martingale.

Corollary. With the hypothesis as in the above proposition, define
$$Z(t) = \int_0^t \langle\theta(s, \cdot), dX(s, \cdot)\rangle.$$
Then $Z(t, \cdot) \in I[\langle\theta, b\rangle, \theta^* a\theta]$, where $\theta^*$ is the transpose of θ.

Proof. $Z(t, \cdot) = \xi_1(t, \cdot) + \ldots + \xi_d(t, \cdot)$.

Definition. Let $\sigma(s, w) = (\sigma_{ij}(s, w))$ be an $n \times d$ matrix of progressively measurable functions with
$$E\int_0^t \sigma_{ij}^2(s, \cdot)\,ds < \infty.$$
If X is a d-dimensional Ito process, we define
$$\Big(\int_0^t \sigma(s, \cdot)\,dX(s, \cdot)\Big)_i = \sum_{j=1}^d \int_0^t \sigma_{ij}(s, \cdot)\,dX_j(s, \cdot).$$
Exercise 3. Let
$$Z(t, w) = \int_0^t \sigma(s, \cdot)\,dY(s, \cdot),$$
where $Y \in I[0, a]$ is a d-dimensional Ito process and σ is as in the above definition. Show that
$$Z(t, \cdot) \in I[0, \sigma a\sigma^*]$$
is an n-dimensional Ito process (assume that σ is bounded).
Exercise 4. Verify that
$$E(|Z(t)|^2) = E\Big(\int_0^t \operatorname{tr}(\sigma a\sigma^*)\,ds\Big).$$
Exercise 5. Do Exercise 3 with the assumption that $\sigma a\sigma^*$ is bounded.
Exercise 6. State and prove a change of variable formula for stochastic
integrals in the case of several dimensions.
(Hint: For the proof, use the change of variable formula in the one-dimensional case and d(X + Y) = dX + dY.)
14. Brownian Motion as a Gaussian Process
SO FAR WE have been considering Brownian motion as a Markov process. We shall now show that Brownian motion can be considered as a Gaussian process.
Definition. Let $X \equiv (X_1, \ldots, X_N)$ be an N-dimensional random variable. It is said to have an N-variate normal (or Gaussian) distribution with mean $\mu = (\mu_1, \ldots, \mu_N)$ and covariance A if the density function is
$$\frac{1}{(2\pi)^{N/2}}\,\frac{1}{(\det A)^{1/2}}\exp\Big[-\frac{1}{2}(X - \mu)A^{-1}(X - \mu)^*\Big],$$
where A is an $N \times N$ positive definite symmetric matrix.
Note. 1. $E(X_i) = \mu_i$.
2. $\operatorname{Cov}(X_i, X_j) = (A)_{ij}$.
Theorem. $X \equiv (X_1, \ldots, X_N)$ has a multivariate normal distribution if and only if for every $\lambda \in \mathbb{R}^N$, $\langle\lambda, X\rangle$ is a one-dimensional Gaussian random variable.
We omit the proof.
Definition. A stochastic process $\{X_t : t \in I\}$ is called a Gaussian process if for all $t_1, t_2, \ldots, t_N \in I$, $(X_{t_1}, \ldots, X_{t_N})$ has an N-variate normal distribution.
Exercise 1. Let $\{X_t : t \ge 0\}$ be a one-dimensional Brownian motion. Then show that

(a) $X_t$ is a Gaussian process.
(Hint: Use the previous theorem and the fact that the increments are independent.)

(b) $E(X_t) = 0\ \forall t$; $E(X(t)X(s)) = s \wedge t$.

Let $\rho : [0, 1] \times [0, 1] \to \mathbb{R}$ be defined by
$$\rho(s, t) = s \wedge t.$$
Define $K : L^2_{\mathbb{R}}[0, 1] \to L^2_{\mathbb{R}}[0, 1]$ by
$$Kf(s) = \int_0^1 \rho(s, t)\,f(t)\,dt.$$
Theorem . K is a symmetric, compact operator. It has only a countable
number of eigenvalues and has a complete set of eigenvectors.
We omit the proof.
Exercise 2. Let λ be any eigenvalue of K and f an eigenvector belonging to λ. Show that

(a) $\lambda f'' + f = 0$ with $f(0) = 0 = f'(1)$.

(b) Using (a) deduce that the eigenvalues are given by $\lambda_n = 4/(2n + 1)^2\pi^2$ and the corresponding eigenvectors are given by
$$f_n(t) = \sqrt{2}\,\sin\Big[\frac{1}{2}(2n + 1)\pi t\Big], \qquad n = 0, 1, 2, \ldots.$$
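The eigenrelation $Kf_n = \lambda_n f_n$ can be verified directly by numerical quadrature (a sketch, not from the text; the grid size and the sample points s are arbitrary):

```python
import math

def K_apply(f, s, m=2000):
    """Midpoint-rule value of (Kf)(s), integrating min(s, t) f(t) over [0, 1]."""
    h = 1.0 / m
    return sum(min(s, (k + 0.5) * h) * f((k + 0.5) * h) * h
               for k in range(m))

for n in (0, 1, 2):
    lam = 4.0 / ((2 * n + 1) ** 2 * math.pi ** 2)
    def f(t, n=n):
        return math.sqrt(2.0) * math.sin((2 * n + 1) * math.pi * t / 2.0)
    for s in (0.3, 0.7):
        assert abs(K_apply(f, s) - lam * f(s)) < 1e-4
print("K f_n = lambda_n f_n verified")
```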
Let $Z_0, Z_1, \ldots, Z_n, \ldots$ be identically distributed, independent, normal random variables with mean 0 and variance 1. Then we have

Proposition. $Y(t, w) = \sum_{n=0}^\infty Z_n(w)\,f_n(t)\sqrt{\lambda_n}$ converges in mean square for every real t.
Proof. Let $Y_m(t, w) = \sum_{i=0}^m Z_i(w)\,f_i(t)\sqrt{\lambda_i}$. Then
$$E\big\{(Y_{n+m}(t, \cdot) - Y_n(t, \cdot))^2\big\} = \sum_{i=n+1}^{n+m} f_i^2(t)\,\lambda_i, \qquad E\big(\|Y_{n+m}(\cdot) - Y_n(\cdot)\|^2\big) \le \sum_{i=n+1}^{n+m} \lambda_i \to 0.$$
Remark. As each $Y_n(t, \cdot)$ is a normal random variable with mean 0 and variance $\sum_{i=0}^n \lambda_i f_i^2(t)$, $Y(t, \cdot)$ is also a normal random variable, with mean zero and variance $\sum_{i=0}^\infty \lambda_i f_i^2(t)$. To see this one need only observe that the limit of a sequence of normal random variables is a normal random variable.
Theorem (Mercer).
$$\rho(s, t) = \sum_{i=0}^\infty \lambda_i f_i(t) f_i(s), \qquad (s, t) \in [0, 1] \times [0, 1].$$
The convergence is uniform.
We omit the proof.
Exercise 3. Using Mercer's theorem show that $\{X_t : 0 \le t \le 1\}$ is a Brownian motion, where
$$X(t, w) = \sum_{n=0}^\infty Z_n(w)\,f_n(t)\sqrt{\lambda_n}.$$
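A numerical version of Exercise 3 (a sketch, not from the text): sample the truncated series and check that the empirical covariance of (X(s), X(t)) is close to $s \wedge t$. The truncation level and sample size are arbitrary, and the truncation biases the covariance slightly (by the Mercer tail).

```python
import math
import random

random.seed(4)
N, n_paths = 100, 10000
lam = [4.0 / ((2 * k + 1) ** 2 * math.pi ** 2) for k in range(N)]

def f(k, t):
    return math.sqrt(2.0) * math.sin((2 * k + 1) * math.pi * t / 2.0)

s, t = 0.3, 0.8
fs = [f(k, s) * math.sqrt(lam[k]) for k in range(N)]
ft = [f(k, t) * math.sqrt(lam[k]) for k in range(N)]
acc = 0.0
for _ in range(n_paths):
    Z = [random.gauss(0.0, 1.0) for _ in range(N)]
    Xs = sum(z * c for z, c in zip(Z, fs))
    Xt = sum(z * c for z, c in zip(Z, ft))
    acc += Xs * Xt
cov = acc / n_paths
print(cov)   # close to min(s, t) = 0.3
```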
This exercise now implies that
$$\int_0^1 X^2(s, w)\,ds = (L^2 \text{ norm of } X)^2$$
$$= \sum_n \lambda_n Z_n^2(w),$$
since the $f_n(t)$ are orthonormal. Therefore
$$E\big(e^{-\int_0^1 X^2(s,\cdot)\,ds}\big) = E\big(e^{-\sum_{n=0}^\infty \lambda_n Z_n^2(w)}\big) = \prod_{n=0}^\infty E\big(e^{-\lambda_n Z_n^2}\big)$$
(by independence of the $Z_n$)
$$= \prod_{n=0}^\infty E\big(e^{-\lambda_n Z_0^2}\big),$$
as $Z_0, Z_1, \ldots$ are identically distributed. Therefore
$$E\big(e^{-\int_0^1 X^2(s,\cdot)\,ds}\big) = \prod_{n=0}^\infty \frac{1}{\sqrt{1 + 2\lambda_n}} = \prod_{n=0}^\infty \frac{1}{\sqrt{1 + \frac{8}{(2n+1)^2\pi^2}}} = \frac{1}{\sqrt{\cosh\sqrt{2}}}.$$
APPLICATION. If $F(a) = P\big(\int_0^1 X^2(s)\,ds < a\big)$, then
$$\int_0^\infty e^{-a}\,dF(a) = \int_{-\infty}^\infty e^{-a}\,dF(a) = E\big(e^{-\int_0^1 X^2(s)\,ds}\big) = \frac{1}{\sqrt{\cosh\sqrt{2}}}.$$
15. Equivalent Forms of Ito Process
LET $(\Omega, \mathcal{F}, P)$ BE A probability space with $(\mathcal{F}_t)_{t \ge 0}$ an increasing family of sub-σ-algebras of $\mathcal{F}$ such that $\sigma(\cup_{t \ge 0}\mathcal{F}_t) = \mathcal{F}$. Let

(i) $a : [0, \infty) \times \Omega \to S_d^+$ be a progressively measurable, bounded function taking values in $S_d^+$, the class of all $d \times d$ positive semidefinite matrices with real entries;

(ii) $b : [0, \infty) \times \Omega \to \mathbb{R}^d$ be a bounded, progressively measurable function;

(iii) $X : [0, \infty) \times \Omega \to \mathbb{R}^d$ be progressively measurable, right continuous and continuous almost surely.

For $(s, w) \in [0, \infty) \times \Omega$ define the operator
$$L_{s,w} = \frac{1}{2}\sum_{i,j=1}^d a_{ij}(s, w)\frac{\partial^2}{\partial x_i\partial x_j} + \sum_{j=1}^d b_j(s, w)\frac{\partial}{\partial x_j}.$$
For f, u, h belonging to $C_0^\infty(\mathbb{R}^d)$, $C_0^\infty([0, \infty) \times \mathbb{R}^d)$ and $C_b^{1,2}([0, \infty) \times \mathbb{R}^d)$ respectively, we define $Y_f(t, w)$, $Z_u(t, w)$, $P_h(t, w)$ as follows:
$$Y_f(t, w) = f(X(t, w)) - \int_0^t (L_{s,w}f)(X(s, w))\,ds,$$
$$Z_u(t,w) = u(t,X(t,w)) - \int_0^t \Big(\frac{\partial u}{\partial s} + L_{s,w}u\Big)(s,X(s,w))\,ds,$$
$$P_h(t,w) = \exp\Big[h(t,X(t,w)) - \int_0^t \Big(\frac{\partial h}{\partial s} + L_{s,w}h\Big)(s,X(s,w))\,ds - \frac{1}{2}\int_0^t \big\langle a(s,w)\nabla_x h(s,X(s,w)),\, \nabla_x h(s,X(s,w))\big\rangle\,ds\Big].$$
Theorem. The following conditions are equivalent.

(i) $X_\theta(t,w) = \exp\Big[\langle\theta, X(t,w)\rangle - \int_0^t \langle\theta, b(s,w)\rangle\,ds - \frac{1}{2}\int_0^t \langle\theta, a(s,w)\theta\rangle\,ds\Big]$ is a martingale relative to $(\Omega, \mathcal{F}_t, P)$ for every $\theta\in\mathbb{R}^d$.

(ii) $X_\theta(t,w)$ is a martingale for every $\theta\in\mathbb{C}^d$; in particular $X_{i\theta}(t,w)$ is a martingale for every $\theta\in\mathbb{R}^d$.

(iii) $Y_f(t,w)$ is a martingale for every $f\in C_0^{\infty}(\mathbb{R}^d)$.

(iv) $Z_u(t,w)$ is a martingale for every $u\in C_0^{\infty}([0,\infty)\times\mathbb{R}^d)$.

(v) $P_h(t,w)$ is a martingale for every $h\in C_b^{1,2}([0,\infty)\times\mathbb{R}^d)$.

(vi) The result of (v) is true for functions $h\in C^{1,2}([0,\infty)\times\mathbb{R}^d)$ of linear growth, i.e. such that there exist constants $A$ and $B$ with $|h(t,x)|\le A|x| + B$, provided the functions $\frac{\partial h}{\partial t}$, $\frac{\partial h}{\partial x_i}$ and $\frac{\partial^2 h}{\partial x_i\partial x_j}$ which occur under the integral sign in the exponent also grow at most linearly.

Remark. The above theorem enables one to replace the martingale condition in the definition of an Itô process by any one of the six equivalent conditions given above.
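A discrete-time analogue of condition (i) (an illustration, not from the text): for a simple $\pm1$ random walk $S_n$, the normalised exponential $M_n = e^{\theta S_n}/(\cosh\theta)^n$ is a martingale, the factor $(\cosh\theta)^{-n}$ playing the role of the two compensating integrals. The martingale property can be verified exactly by enumerating all coin sequences:

```python
import math
from itertools import product

theta = 0.7   # arbitrary choice of parameter
N = 6         # path length for the exhaustive check

def M(s, n):
    # normalised exponential of the walk: exp(theta * S_n) / cosh(theta)^n
    return math.exp(theta * s) / math.cosh(theta) ** n

ok = True
for n in range(N):
    for steps in product([-1, 1], repeat=n):
        s = sum(steps)
        # E(M_{n+1} | F_n): average over the two continuations of this path
        avg = 0.5 * (M(s + 1, n + 1) + M(s - 1, n + 1))
        ok = ok and abs(avg - M(s, n)) < 1e-12
# ok is True: (e^theta + e^-theta)/2 = cosh(theta) cancels exactly
```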
Proof. (i) $\Rightarrow$ (ii). $X_\theta(t,\cdot)$ is $\mathcal{F}_t$-measurable because it is progressively measurable. That $E(|X_\theta(t,w)|) < \infty$ is a consequence of (i) and the fact that $a$ is bounded.
The function $\theta \mapsto X_\theta(t,w)\big/X_\theta(s,w)$ is entire for fixed $t, s, w$ $(t>s)$; Morera's theorem then shows that, for $A\in\mathcal{F}_s$,
$$\theta \mapsto \int_A \frac{X_\theta(t,w)}{X_\theta(s,w)}\,dP(w)$$
is analytic. By hypothesis,
$$\int_A \frac{X_\theta(t,w)}{X_\theta(s,w)}\,dP(w) = P(A), \qquad \forall\,\theta\in\mathbb{R}^d.$$
Thus
$$\int_A \frac{X_\theta(t,w)}{X_\theta(s,w)}\,dP(w) = P(A) \quad \text{for all complex } \theta.$$
Therefore
$$E(X_\theta(t,w)\,|\,\mathcal{F}_s) = X_\theta(s,w),$$
proving (ii).

(ii) $\Rightarrow$ (iii). Let
$$A(t,w) = \exp\Big[i\int_0^t \langle\theta, b(s,w)\rangle\,ds - \frac{1}{2}\int_0^t \langle\theta, a(s,w)\theta\rangle\,ds\Big], \qquad \theta\in\mathbb{R}^d.$$
By definition, $A$ is progressively measurable and continuous. Also $\big|\frac{dA}{dt}(t,w)\big|$ is bounded on every compact set in $\mathbb{R}$, with a bound independent of $w$. Therefore $A(t,w)$ is of bounded variation on every interval $[0,T]$, with the variation $\|A\|_{[0,T]}$ bounded uniformly in $w$. Let $M(t,w) = X_{i\theta}(t,w)$. Then
$$\sup_{0\le t\le T}|M(t,w)| \le e^{\frac{1}{2}T\sup|\langle\theta, a\theta\rangle|}.$$
By (ii), $M(t,\cdot)$ is a martingale, and since
$$E\Big(\sup_{0\le t\le T}|M(t,w)|\cdot\|A\|_{[0,T]}(w)\Big) < \infty, \qquad \forall\,T,$$
$$M(t,\cdot)A(t,\cdot) - \int_0^t M(s,\cdot)\,dA(s,\cdot)$$
is a martingale (for a proof see the Appendix), i.e. $Y_f(t,w)$ is a martingale when $f(x) = e^{i\langle\theta,x\rangle}$.
Let $f\in C_0^{\infty}(\mathbb{R}^d)$. Then $\hat f\in\mathcal{S}(\mathbb{R}^d)$, the Schwartz space. Therefore, by the Fourier inversion theorem,
$$f(x) = \int_{\mathbb{R}^d} \hat f(\theta)\,e^{i\langle\theta,x\rangle}\,d\theta.$$
On simplification we get
$$Y_f(t,w) = \int_{\mathbb{R}^d} \hat f(\theta)\,Y_\theta(t,w)\,d\theta,$$
where $Y_\theta \equiv Y_{e^{i\langle\theta,\cdot\rangle}}$. Clearly $Y_f(t,\cdot)$ is progressively measurable and hence $\mathcal{F}_t$-measurable. Using the fact that
$$E(|Y_\theta(t,w)|) \le 1 + t\Big(d\,|\theta|\,\|b\|_\infty + \frac{d^2}{2}\,|\theta|^2\,\|a\|_\infty\Big),$$
the fact that $\mathcal{S}(\mathbb{R}^d)\subset L^1(\mathbb{R}^d)$ and that $\mathcal{S}(\mathbb{R}^d)$ is closed under multiplication by polynomials, we get $E(|Y_f(t,w)|) < \infty$. An application of Fubini's theorem gives $E(Y_f(t,w)\,|\,\mathcal{F}_s) = Y_f(s,w)$ if $t>s$. This proves (iii).
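The inversion step above can be illustrated numerically in $d=1$ (a sketch; the test function and quadrature parameters are ad hoc, not from the text). With the convention $f(x) = \int \hat f(\theta)\,e^{i\theta x}\,d\theta$, the function $f(x) = e^{-x^2/2}$ has $\hat f(\theta) = e^{-\theta^2/2}/\sqrt{2\pi}$:

```python
import math

def fhat(theta):
    # transform of f(x) = exp(-x^2/2) in the convention
    # f(x) = integral of fhat(theta) * e^{i theta x} dtheta
    return math.exp(-theta ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

def invert(x, L=10.0, h=0.001):
    # trapezoidal rule for int_{-L}^{L} fhat(theta) cos(theta x) dtheta;
    # the imaginary part cancels by symmetry of fhat
    n = int(2 * L / h)
    total = 0.0
    for k in range(n + 1):
        theta = -L + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * fhat(theta) * math.cos(theta * x)
    return total * h

recovered = invert(0.5)   # close to exp(-0.5**2 / 2) = exp(-0.125)
```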
(iii) $\Rightarrow$ (iv). Let $u\in C_0^{\infty}([0,\infty)\times\mathbb{R}^d)$. Clearly $Z_u(t,\cdot)$ is progressively measurable. Since $Z_u(t,w)$ is bounded for every $w$, $E(|Z_u(t,w)|) < \infty$. Let $t>s$. Then
$$E(Z_u(t,w) - Z_u(s,w)\,|\,\mathcal{F}_s)$$
$$= E\big(u(t,X(t,w)) - u(s,X(s,w))\,\big|\,\mathcal{F}_s\big) - E\Big(\int_s^t \Big(\frac{\partial u}{\partial\sigma} + L_{\sigma,w}u\Big)(\sigma,X(\sigma,w))\,d\sigma\,\Big|\,\mathcal{F}_s\Big)$$
$$= E\big(u(t,X(t,w)) - u(t,X(s,w))\,\big|\,\mathcal{F}_s\big) + E\big(u(t,X(s,w)) - u(s,X(s,w))\,\big|\,\mathcal{F}_s\big) - E\Big(\int_s^t \Big(\frac{\partial u}{\partial\sigma} + L_{\sigma,w}u\Big)(\sigma,X(\sigma,w))\,d\sigma\,\Big|\,\mathcal{F}_s\Big)$$
$$= E\Big(\int_s^t (L_{\sigma,w}u)(t,X(\sigma,w))\,d\sigma\,\Big|\,\mathcal{F}_s\Big) + E\Big(\int_s^t \frac{\partial u}{\partial\sigma}(\sigma,X(s,w))\,d\sigma\,\Big|\,\mathcal{F}_s\Big) - E\Big(\int_s^t \Big(\frac{\partial u}{\partial\sigma} + L_{\sigma,w}u\Big)(\sigma,X(\sigma,w))\,d\sigma\,\Big|\,\mathcal{F}_s\Big), \quad \text{by (iii)},$$
$$= E\Big(\int_s^t \big[L_{\sigma,w}u(t,X(\sigma,w)) - L_{\sigma,w}u(\sigma,X(\sigma,w))\big]\,d\sigma\,\Big|\,\mathcal{F}_s\Big) + E\Big(\int_s^t \Big[\frac{\partial u}{\partial\sigma}(\sigma,X(s,w)) - \frac{\partial u}{\partial\sigma}(\sigma,X(\sigma,w))\Big]\,d\sigma\,\Big|\,\mathcal{F}_s\Big)$$
$$= E\Big(\int_s^t \big[L_{\sigma,w}u(t,X(\sigma,w)) - L_{\sigma,w}u(\sigma,X(\sigma,w))\big]\,d\sigma\,\Big|\,\mathcal{F}_s\Big) - E\Big(\int_s^t d\sigma \int_s^{\sigma} L_{\tau,w}\frac{\partial u}{\partial\sigma}(\sigma,X(\tau,w))\,d\tau\,\Big|\,\mathcal{F}_s\Big).$$
The last step follows from (iii) (the fact that $\sigma > s$ gives a minus sign).
$$= E\Big(\int_s^t d\sigma \int_{\sigma}^t \frac{\partial}{\partial\tau}\,L_{\sigma,w}u(\tau,X(\sigma,w))\,d\tau\,\Big|\,\mathcal{F}_s\Big) - E\Big(\int_s^t d\sigma \int_s^{\sigma} L_{\tau,w}\frac{\partial u}{\partial\sigma}(\sigma,X(\tau,w))\,d\tau\,\Big|\,\mathcal{F}_s\Big) = 0$$
by Fubini's theorem (interchange $\sigma$ and $\tau$ in the second term and note that $\frac{\partial}{\partial\tau}L_{\sigma,w}u = L_{\sigma,w}\frac{\partial u}{\partial\tau}$, since $L_{\sigma,w}$ acts only on the space variables). Therefore $Z_u(t,w)$ is a martingale.
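The Fubini interchange over the triangle $\{s\le\tau\le\sigma\le t\}$, namely $\int_s^t d\sigma\int_s^{\sigma} g\,d\tau = \int_s^t d\tau\int_{\tau}^t g\,d\sigma$, can be checked numerically for a sample integrand (an illustration; the particular $g$, grid and interval are ad hoc):

```python
# Midpoint-rule check that integrating over the triangle s <= tau <= sigma <= t
# in either order gives the same value, also compared with the exact answer
# for this particular g, worked out by hand.

def g(sigma, tau):
    return (sigma - tau) ** 2 + sigma * tau

s, t, n = 0.2, 1.1, 400
h = (t - s) / n
mid = [s + (k + 0.5) * h for k in range(n)]

# order 1: outer integral in sigma, inner in tau over [s, sigma]
order1 = h * h * sum(g(sig, tau) for sig in mid for tau in mid if tau <= sig)
# order 2: outer integral in tau, inner in sigma over [tau, t]
order2 = h * h * sum(g(sig, tau) for tau in mid for sig in mid if sig >= tau)

# exact value for this g: (t-s)^4/12 + (t^4 - s^4)/8 - s^2 (t^2 - s^2)/4
exact = (t - s) ** 4 / 12 + (t ** 4 - s ** 4) / 8 - s ** 2 * (t ** 2 - s ** 2) / 4
```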
Before proving (iv) $\Rightarrow$ (v) we show that (iv) remains true if $u\in C_b^{1,2}([0,\infty)\times\mathbb{R}^d)$. Let $u\in C_b^{1,2}$.
(*) Assume that there exists a sequence $(u_n)\subset C_0^{\infty}([0,\infty)\times\mathbb{R}^d)$ such that
$$u_n\to u, \quad \frac{\partial u_n}{\partial t}\to\frac{\partial u}{\partial t}, \quad \frac{\partial u_n}{\partial x_i}\to\frac{\partial u}{\partial x_i}, \quad \frac{\partial^2 u_n}{\partial x_i\,\partial x_j}\to\frac{\partial^2 u}{\partial x_i\,\partial x_j}$$
uniformly on compact sets. Then $Z_{u_n}\to Z_u$ pointwise and $\sup_n |Z_{u_n}(t,w)| < \infty$. Therefore $Z_u$ is a martingale. Hence it is enough to justify (*).
For every $u\in C_b^{1,2}([0,\infty)\times\mathbb{R}^d)$ we construct a $\bar u\in C_b^{1,2}((-\infty,\infty)\times\mathbb{R}^d) = C_b^{1,2}(\mathbb{R}\times\mathbb{R}^d)$ as follows. Put
$$\bar u(t,x) = \begin{cases} u(t,x), & \text{if } t\ge 0,\\ C_1\,u(-t,x) + C_2\,u\big(-\tfrac{t}{2},x\big), & \text{if } t<0; \end{cases}$$
matching $\bar u$ with $u$ and $\frac{\partial\bar u}{\partial t}$ with $\frac{\partial u}{\partial t}$ at $t=0$ yields the desired constants $C_1$ and $C_2$; in fact $C_1 = -3$, $C_2 = 4$. (*) will be proved if we obtain an approximating sequence for $\bar u$. Let $S:\mathbb{R}\to\mathbb{R}$ be any $C^{\infty}$ function such that
$$S(x) = \begin{cases} 1, & \text{if } |x|\le 1,\\ 0, & \text{if } |x|\ge 2.\end{cases}$$
Let $S_n(x) = S\big(\frac{|x|^2}{n^2}\big)$, where $|x|^2 = x_1^2 + \cdots + x_{d+1}^2$. Put $u_n = S_n\,\bar u$. This satisfies (*).
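The constants $C_1 = -3$, $C_2 = 4$ can be checked directly (a sketch; the sample function $u$ below is arbitrary): matching values gives $C_1 + C_2 = 1$, matching $t$-derivatives gives $-C_1 - \tfrac12 C_2 = 1$, and the resulting extension is $C^1$ across $t=0$:

```python
import math
from fractions import Fraction

# matching conditions at t = 0 for ubar(t) = C1 u(-t) + C2 u(-t/2), t < 0:
#   value:        C1 + C2       = 1
#   t-derivative: -C1 - (1/2)C2 = 1
C1, C2 = Fraction(-3), Fraction(4)
assert C1 + C2 == 1 and -C1 - Fraction(1, 2) * C2 == 1

# numerical C^1 check across t = 0 with an arbitrary smooth sample u
def u(t):
    return math.sin(t + 0.3)

def ubar(t):
    return u(t) if t >= 0 else float(C1) * u(-t) + float(C2) * u(-t / 2)

h = 1e-5
left_slope = (ubar(0.0) - ubar(-h)) / h    # one-sided slopes agree to O(h)
right_slope = (ubar(h) - ubar(0.0)) / h
```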
(iv) $\Rightarrow$ (v). Let $h\in C_b^{1,2}([0,\infty)\times\mathbb{R}^d)$. Put $u = \exp(h(t,x))$ in (iv) to conclude that
$$M(t,w) = e^{h(t,X(t,w))} - \int_0^t e^{h(s,X(s,w))}\Big[\frac{\partial h}{\partial s} + L_{s,w}h + \frac{1}{2}\big\langle\nabla_x h,\, a\nabla_x h\big\rangle\Big](s,X(s,w))\,ds$$
is a martingale. Put
$$A(t,w) = \exp\Big[-\int_0^t \Big(\frac{\partial h}{\partial s} + L_{s,w}h + \frac{1}{2}\big\langle a(s,w)\nabla_x h, \nabla_x h\big\rangle\Big)(s,X(s,w))\,ds\Big].$$
$A(t,w)$ is progressively measurable, continuous everywhere, and
$$\|A\|_{[0,T]}(w) \le C_1 e^{C_2 T},$$
where $C_1$ and $C_2$ are constants; this follows from the fact that $\big|\frac{dA}{dt}\big|$ is bounded on $[0,T]$ uniformly in $w$. Also $\sup_{0\le t\le T}|M(t,w)|$ is uniformly bounded in $w$. Therefore
$$E\Big(\sup_{0\le t\le T}|M(t,w)|\cdot\|A\|_{[0,T]}(w)\Big) < \infty.$$
Hence $M(t,\cdot)A(t,\cdot) - \int_0^t M(s,\cdot)\,dA(s,\cdot)$ is a martingale. Now
$$\frac{dA(s,w)}{A(s,w)} = -\Big(\frac{\partial h}{\partial s} + L_{s,w}h + \frac{1}{2}\big\langle a\nabla_x h, \nabla_x h\big\rangle\Big)(s,X(s,w))\,ds.$$
Therefore
$$M(t,w) = e^{h(t,X(t,w))} + \int_0^t e^{h(s,X(s,w))}\,\frac{dA(s,w)}{A(s,w)}.$$
Also
$$M(t,w)A(t,w) = P_h(t,w) + A(t,w)\int_0^t e^{h(s,X(s,w))}\,\frac{dA(s,w)}{A(s,w)},$$
$$\int_0^t M(s,\cdot)\,dA(s,\cdot) = \int_0^t e^{h(s,X(s,w))}\,dA(s,w) + \int_0^t dA(s,w)\int_0^s e^{h(\sigma,X(\sigma,w))}\,\frac{dA(\sigma,w)}{A(\sigma,w)}.$$
Use Fubini's theorem to evaluate the second integral on the right above and conclude that $P_h(t,w)$ is a martingale.
(vi) $\Rightarrow$ (i) is clear if we take $h(t,x) = \langle\theta,x\rangle$. It only remains to prove that (v) $\Rightarrow$ (vi).

(v) $\Rightarrow$ (vi). The technique used to prove this is an important one and we shall have occasion to use it again.

Step 1. Let $h(t,x) = \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_d x_d = \langle\theta,x\rangle$ for every $(t,x)\in[0,\infty)\times\mathbb{R}^d$, where $\theta$ is some fixed element of $\mathbb{R}^d$. Let
$$Z(t) = \exp\Big[\langle\theta, X_t\rangle - \int_0^t \langle\theta, b\rangle\,ds - \frac{1}{2}\int_0^t \langle\theta, a\theta\rangle\,ds\Big].$$
We claim that $Z(t,\cdot)$ is a supermartingale.

Let $f:\mathbb{R}\to\mathbb{R}$ be a $C^{\infty}$ function with compact support such that $f(x) = x$ on $|x|\le 1/2$ and $|f(x)|\le 1$ for all $x$. Put $f_n(x) = n\,f(x/n)$. Then $|f_n(x)|\le C|x|$ for some $C$ independent of $n$ and $x$, and $f_n(x)$ converges to $x$.
Let $h_n(x) = \sum_{i=1}^d \theta_i f_n(x_i)$. Then $h_n(x)$ converges to $\langle\theta,x\rangle$ and $|h_n(x)|\le C|x|$, where $C$ is again independent of $n$ and $x$. By (v),
$$Z_n(t) = \exp\Big[h_n(X_t) - \int_0^t \Big(\frac{\partial h_n}{\partial s} + L_{s,w}h_n\Big)\,ds - \frac{1}{2}\int_0^t \big\langle a\nabla_x h_n, \nabla_x h_n\big\rangle\,ds\Big]$$
is a martingale. As $h_n(x)$ converges to $\langle\theta,x\rangle$, $Z_n(t,\cdot)$ converges to $Z(t,\cdot)$ pointwise. Consequently, by Fatou's lemma,
$$E(Z(t)) = E\big(\lim_n Z_n(t)\big) \le \liminf_n E(Z_n(t)) = 1,$$
and $Z(t)$ is a supermartingale.
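One concrete choice of the truncation functions $f_n$ (an assumption for illustration; any $f$ with the stated properties works) uses the standard smooth bump $g(t) = e^{-1/t}$ for $t>0$, $g(t)=0$ otherwise:

```python
import math

def g(t):
    # smooth on all of R, identically 0 for t <= 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def S(x):
    # C-infinity cutoff: S = 1 on |x| <= 1/2, S = 0 on |x| >= 1
    a, b = g(1.0 - abs(x)), g(abs(x) - 0.5)
    return a / (a + b)

def f(x):
    # f(x) = x on |x| <= 1/2, |f(x)| <= 1, support contained in [-1, 1]
    return x * S(x)

def f_n(n, x):
    # the rescaled truncations: f_n(x) = n f(x/n) = x S(x/n), so
    # |f_n(x)| <= |x| and f_n(x) = x exactly as soon as n >= 2|x|
    return n * f(x / n)
```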
Step 2. $E\big(\exp\big(B\sup_{0\le s\le t}|X(s,w)|\big)\big) < \infty$ for each $t$ and $B$. For, let $Y(w) = \sup_{0\le s\le t}|X(s,w)|$ and $Y_i(w) = \sup_{0\le s\le t}|X_i(s,w)|$, where $X = (X_1,\ldots,X_d)$. Clearly $Y\le Y_1 + \cdots + Y_d$. Therefore
$$E(e^{BY}) \le E\big(e^{BY_1}e^{BY_2}\cdots e^{BY_d}\big).$$
The right hand side above is finite provided $E(e^{dBY_i}) < \infty$ for each $i$, as can be seen from the generalised Hölder inequality. Since $B$ is arbitrary, to prove the assertion it is enough to show that $E(e^{BY_i}) < \infty$ for each $i = 1,2,\ldots,d$ and every $B>0$.

Put $\theta_2 = \theta_3 = \cdots = \theta_d = 0$ in Step 1 to get that
$$u(t) = \exp\Big[\theta_1 X_1(t) - \int_0^t \theta_1 b_1(s,\cdot)\,ds - \frac{1}{2}\theta_1^2\int_0^t a_{11}(s,\cdot)\,ds\Big]$$
is a supermartingale. Therefore
$$(\ast)\qquad P\Big(\sup_{0\le s\le t} u(s,\cdot) \ge \lambda\Big) \le \frac{E(u(0))}{\lambda} = \frac{1}{\lambda}, \qquad \forall\,\lambda>0$$
(refer to the section on martingales). Let $c$ be a common bound for both $b_1$ and $a_{11}$, and let $\theta_1 > 0$. Then $(\ast)$ reads
$$P\Big(\sup_{0\le s\le t}\exp(\theta_1 X_1(s)) \ge \lambda\,\exp\big(\theta_1 ct + \tfrac{1}{2}\theta_1^2 ct\big)\Big) \le \frac{1}{\lambda}.$$
Replacing $\lambda$ by $e^{\theta_1\ell}\,e^{-\theta_1 ct - \frac{1}{2}\theta_1^2 ct}$ we get
$$P\Big(\sup_{0\le s\le t}\exp(\theta_1 X_1(s)) \ge e^{\theta_1\ell}\Big) \le e^{-\theta_1\ell + \theta_1 ct + \frac{1}{2}\theta_1^2 ct},$$
i.e.
$$P\Big(\sup_{0\le s\le t} X_1(s) \ge \ell\Big) \le e^{-\theta_1\ell + \theta_1 ct + \frac{1}{2}\theta_1^2 ct}, \qquad \forall\,\theta_1>0.$$
Similarly,
$$P\Big(\sup_{0\le s\le t}\big(-X_1(s)\big) \ge \ell\Big) \le e^{-\theta_1\ell + \theta_1 ct + \frac{1}{2}\theta_1^2 ct}, \qquad \forall\,\theta_1>0.$$
As
$$\{Y_1(w)\ge\ell\} \subset \Big\{\sup_{0\le s\le t} X_1(s) \ge \ell\Big\} \cup \Big\{\sup_{0\le s\le t}\big(-X_1(s)\big) \ge \ell\Big\},$$
we get
$$P\{Y_1\ge\ell\} \le 2\,e^{-\theta_1\ell + \theta_1 ct + \frac{1}{2}\theta_1^2 ct}, \qquad \forall\,\theta_1>0.$$
Now
$$E(e^{BY_1}) = 1 + B\int_0^{\infty} e^{Bx}\,P(Y_1\ge x)\,dx \quad (\text{since } Y_1\ge 0)$$
$$\le 1 + 2B\int_0^{\infty}\exp\Big(Bx - x\theta_1 + \theta_1 ct + \frac{1}{2}\theta_1^2 ct\Big)\,dx < \infty, \quad \text{if } B < \theta_1.$$
This completes the proof of Step 2.
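The bound just obtained holds for every $\theta_1>0$ and can be optimised (a sketch with ad hoc numbers, not from the text): for $\ell > ct$ the exponent $-\theta\ell + \theta ct + \frac12\theta^2 ct$ is minimised at $\theta^{\ast} = (\ell - ct)/(ct)$, giving a Gaussian-type tail, and the final $x$-integral has the closed form $\int_0^{\infty} e^{(B-\theta)x}\,dx = 1/(\theta - B)$ for $B<\theta$:

```python
import math

c, t, l = 1.0, 1.0, 5.0  # arbitrary sample values with l > c*t

def exponent(theta):
    # exponent of the tail bound P(Y1 >= l) <= 2 exp(exponent(theta))
    return -theta * l + theta * c * t + 0.5 * theta ** 2 * c * t

theta_star = (l - c * t) / (c * t)   # stationary point of the exponent

# the x-integral in the final estimate, for B < theta:
B, theta = 2.0, 4.0
numeric = sum(math.exp((B - theta) * (k + 0.5) * 0.001) * 0.001
              for k in range(40000))       # midpoint rule on [0, 40]
closed = 1.0 / (theta - B)                 # closed form, here 0.5
```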
Step 3. $Z(t,w)$ is a martingale. For
$$|Z_n(t,w)| = Z_n(t,w) = \exp\Big[h_n(X_t) - \int_0^t \Big(\frac{\partial h_n}{\partial s} + L_{s,w}h_n\Big)\,ds - \frac{1}{2}\int_0^t \big\langle a\nabla_x h_n, \nabla_x h_n\big\rangle\,ds\Big] \le \exp\Big[h_n(X_t) - \int_0^t L_{s,w}h_n\,ds\Big]$$