Stochastic Processes

Richard B. Sowers

Department of Mathematics, University of Illinois at Urbana–Champaign, Urbana, IL 61801
E-mail address: [email protected]

December 8, 2012
Contents

Chapter 1. Brownian Motion
Chapter 2. Martingales
Chapter 3. Stochastic Integration
Chapter 4. SDEs
Chapter 5. Itô's formula
Chapter 6. Feynman–Kac
Chapter 7. CIR Processes
Chapter 8. Girsanov's theorem
Chapter 9. The martingale problem and Lévy's characterization of Brownian motion
    1. Poisson processes
Chapter 10. Martingales as time-changed Brownian motions
Chapter 11. Filtering
Chapter 12. Option Pricing
    1. Bonds
Chapter 13. Feller Conditions
Chapter 14. Wong–Zakai
Chapter 15. Euler–Maruyama approximations
Chapter 16. Large Deviations
Chapter 17. Malliavin Calculus
Chapter 18. Stochastic Averaging
CHAPTER 1

Brownian Motion

Let's start by constructing Brownian motion. Let's fix a time horizon $T>0$, and construct Brownian motion on $[0,T]$. Let $\{\varphi_n\}_{n\in\mathbb{N}}$ be an orthonormal basis of $L^2[0,T]$; e.g.,
\[ \varphi_0(t)=\frac{1}{\sqrt{T}}, \qquad \varphi_{2n}(t)=\sqrt{\frac{2}{T}}\cos\left(\frac{2\pi nt}{T}\right), \qquad \varphi_{2n+1}(t)=\sqrt{\frac{2}{T}}\sin\left(\frac{2\pi nt}{T}\right). \]
Let's also fix a collection $\{\xi_n\}_{n=0}^\infty$ of i.i.d. $N(0,1)$ random variables. Define
\[ W^N_t \overset{\mathrm{def}}{=} \sum_{n=0}^N \xi_n \int_{s=0}^t \varphi_n(s)\,ds = \sum_{n=0}^N \xi_n \left\langle \chi_{[0,t]},\varphi_n\right\rangle_{L^2} \]
for each $t\in[0,T]$ and $N\in\{0,1,\dots\}$. Since $W^N_t$ is a finite linear combination of Gaussian random variables, $W^N_t\in L^2(\Omega)$.

Lemma 0.1. For each $t\in[0,T]$, $W^\infty_t \overset{\mathrm{def}}{=} \lim_{N\to\infty} W^N_t$ exists as a limit in $L^2(\Omega)$.

Proof. For integers $N_1$ and $N_2$ such that $N_1\le N_2$,
\[ (1)\qquad \mathbb{E}\left[|W^{N_2}_t - W^{N_1}_t|^2\right] = \sum_{n=N_1+1}^{N_2} \left\langle \chi_{[0,t]},\varphi_n\right\rangle^2_{L^2}; \]
the claim follows from Bessel's inequality. $\square$
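This truncated expansion is easy to simulate. The following sketch (assuming NumPy and the Fourier basis above; the helper names are illustrative, not from the text) samples $W^N_t$ and evaluates the Bessel sum $\sum_{n=0}^N \langle\chi_{[0,t]},\varphi_n\rangle^2_{L^2}$, which increases to $t$ as $N\to\infty$ — exactly the variance computation behind Lemma 0.1.

```python
import numpy as np

def basis_integrals(N, t, T=1.0):
    """Coefficients <chi_[0,t], phi_n> = int_0^t phi_n(s) ds for the Fourier basis
    phi_0 = 1/sqrt(T), phi_{2n} = sqrt(2/T) cos(2 pi n s/T),
    phi_{2n+1} = sqrt(2/T) sin(2 pi n s/T)."""
    coeffs = np.zeros(N + 1)
    coeffs[0] = t / np.sqrt(T)
    for n in range(1, N // 2 + 1):
        w = 2 * np.pi * n / T
        c = np.sqrt(2.0 / T)
        if 2 * n <= N:
            coeffs[2 * n] = c * np.sin(w * t) / w            # int_0^t phi_{2n}
        if 2 * n + 1 <= N:
            coeffs[2 * n + 1] = c * (1 - np.cos(w * t)) / w  # int_0^t phi_{2n+1}
    return coeffs

def sample_WN(N, t, T=1.0, rng=None):
    """One sample of W^N_t = sum_n xi_n <chi_[0,t], phi_n>."""
    rng = np.random.default_rng(rng)
    return float(rng.standard_normal(N + 1) @ basis_integrals(N, t, T))

# E[(W^N_t)^2] is the Bessel sum, which increases to t as N grows.
bessel = lambda N, t: float(np.sum(basis_integrals(N, t) ** 2))
```

For instance, `bessel(N, 0.5)` approaches $0.5$ from below as $N$ grows, in line with Bessel's inequality.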
Let's next show that $W^\infty$ has independent Gaussian increments.

Lemma 0.2. Fix $0 = t_0 < t_1 < \dots < t_J = T$. Then $\{W^\infty_{t_1}-W^\infty_{t_0},\, W^\infty_{t_2}-W^\infty_{t_1},\dots,\, W^\infty_{t_J}-W^\infty_{t_{J-1}}\}$ are independent random variables and $W^\infty_t - W^\infty_s$ is $N(0,t-s)$.

Proof. Fix $\{\theta_j\}_{j=1}^J\subset\mathbb{R}$. It suffices to show that
\[ (2)\qquad \mathbb{E}\left[\exp\left[\sqrt{-1}\sum_{j=1}^J \theta_j\left(W^\infty_{t_j}-W^\infty_{t_{j-1}}\right)\right]\right] = \exp\left[-\frac12\sum_{j=1}^J \theta_j^2(t_j-t_{j-1})\right]. \]
To see this, note that
\[ \sqrt{-1}\sum_{j=1}^J \theta_j\left(W^\infty_{t_j}-W^\infty_{t_{j-1}}\right) = \lim_{N\to\infty}\sqrt{-1}\sum_{j=1}^J \theta_j\left(W^N_{t_j}-W^N_{t_{j-1}}\right) = \lim_{N\to\infty}\sqrt{-1}\sum_{n=0}^N \xi_n\langle f,\varphi_n\rangle_{L^2}, \]
this limit being in $L^2$, where
\[ f = \sum_{j=1}^J \theta_j\left(\chi_{[0,t_j]}-\chi_{[0,t_{j-1}]}\right) = \sum_{j=1}^J \theta_j\chi_{(t_{j-1},t_j]}. \]
Thus
\[ (3)\qquad \mathbb{E}\left[\exp\left[\sqrt{-1}\sum_{j=1}^J \theta_j\left(W^\infty_{t_j}-W^\infty_{t_{j-1}}\right)\right]\right] = \lim_{N\to\infty}\mathbb{E}\left[\exp\left[\sqrt{-1}\sum_{n=0}^N \xi_n\langle f,\varphi_n\rangle_{L^2}\right]\right] \]
\[ = \lim_{N\to\infty}\exp\left[-\frac12\sum_{n=0}^N\langle f,\varphi_n\rangle^2_{L^2}\right] = \exp\left[-\frac12\|f\|^2_{L^2}\right] = \exp\left[-\frac12\sum_{j=1}^J \theta_j^2(t_j-t_{j-1})\right]. \qquad\square \]
Let's also understand continuity.

Lemma 0.3. There is a process $\{W_t;\,0\le t\le T\}$ which is $\mathbb{P}$-a.s. continuous and such that $\mathbb{P}\{W_t = W^\infty_t\}=1$ for all $t\in[0,T]$. In fact, for each $\gamma\in(0,1/2)$, there is a $\mathbb{P}$-a.s. finite random variable $\Xi_\gamma$ such that
\[ |W_t(\omega)-W_s(\omega)| \le \Xi_\gamma(\omega)|t-s|^\gamma \]
for all $s$ and $t$ in $[0,T]$ and all $\omega\in\Omega$.

Proof. Let's begin by showing that $W^\infty$ can't vary too much over adjacent dyadic rationals. Fix
\[ (4)\qquad p > \frac{1}{\frac12-\gamma} \]
(thus $p>2$) and note that
\[ \mathbb{E}[|W^\infty_t-W^\infty_s|^p] = K_p|t-s|^{p/2} \]
for all $s$ and $t$ in $[0,T]$, where
\[ K_p \overset{\mathrm{def}}{=} \int_{z\in\mathbb{R}}|z|^p\frac{e^{-z^2/2}}{\sqrt{2\pi}}\,dz. \]
For each $n\in\mathbb{N}$, define
\[ A_n \overset{\mathrm{def}}{=} \left\{\max_{0\le j\le 2^n-1}|W^\infty_{(j+1)/2^n}-W^\infty_{j/2^n}| \ge \frac{1}{2^{n\gamma}}\right\}. \]
Note that
\[ \mathbb{P}(A_n) \le \sum_{j=0}^{2^n-1}\mathbb{P}\left\{|W^\infty_{(j+1)/2^n}-W^\infty_{j/2^n}| \ge \frac{1}{2^{n\gamma}}\right\} \le \sum_{j=0}^{2^n-1}2^{n\gamma p}\,\mathbb{E}\left[|W^\infty_{(j+1)/2^n}-W^\infty_{j/2^n}|^p\right] \le K_p 2^{n(1+\gamma p)}\left(\frac{1}{2^{n/2}}\right)^p = \frac{K_p}{2^{n(p(1/2-\gamma)-1)}}. \]
By our choice (4) of $p$, we have that $p(\frac12-\gamma)-1>0$, so
\[ \sum_{n=1}^\infty\mathbb{P}(A_n) < \infty. \]
By the Borel–Cantelli lemma, there is thus an $\Omega^\circ\in\mathcal{F}$ with $\mathbb{P}(\Omega^\circ)=1$ such that for each $\omega\in\Omega^\circ$ only finitely many of the $A_n$'s occur. Hence for each $\omega\in\Omega^\circ$,
\[ \Xi^\circ_\gamma(\omega) \overset{\mathrm{def}}{=} \sup_{n\in\mathbb{N}}2^{n\gamma}\max_{0\le j\le 2^n-1}\left|W^\infty_{(j+1)/2^n}-W^\infty_{j/2^n}\right| \]
is finite; i.e.,
\[ \max_{0\le j\le 2^n-1}\left|W^\infty_{(j+1)/2^n}-W^\infty_{j/2^n}\right| \le \frac{\Xi^\circ_\gamma(\omega)}{2^{n\gamma}} \]
for all $n\in\mathbb{N}$ and all $\omega\in\Omega^\circ$.
For each $t\in[0,T]$ and $n\in\mathbb{N}$, define $\tau_n(t) \overset{\mathrm{def}}{=} \lfloor t2^n\rfloor/2^n$.

Fix $t\in[0,T]$. We first want to compare $\tau_n(t)$ and $\tau_{n+1}(t)$ (both of which can be written as dyadic rationals of level $n+1$). Assume that $\tau_n(t)=k/2^n$; i.e.,
\[ \frac{k}{2^n} \le t < \frac{k+1}{2^n}. \]
If $t < (k+\frac12)/2^n$, then
\[ \frac{2k}{2^{n+1}} \le t < \frac{2k+1}{2^{n+1}}, \]
so $\tau_{n+1}(t) = 2k/2^{n+1}$. Thus
\[ W^\infty_{\tau_n(t)} - W^\infty_{\tau_{n+1}(t)} = W^\infty_{(2k)/2^{n+1}} - W^\infty_{(2k)/2^{n+1}} = 0. \]
If $t \ge (k+\frac12)/2^n$, then
\[ \frac{2k+1}{2^{n+1}} \le t < \frac{2k+2}{2^{n+1}} \]
and $\tau_{n+1}(t) = (2k+1)/2^{n+1}$. In this case
\[ \left|W^\infty_{\tau_n(t)} - W^\infty_{\tau_{n+1}(t)}\right| = \left|W^\infty_{(2k)/2^{n+1}} - W^\infty_{(2k+1)/2^{n+1}}\right| \le \frac{\Xi^\circ_\gamma(\omega)}{2^{(n+1)\gamma}}. \]
Thus for any $n_1$ and $n_2$ in $\mathbb{N}$ with $n_1<n_2$,
\[ (5)\qquad \left|W^\infty_{\tau_{n_2}(t)} - W^\infty_{\tau_{n_1}(t)}\right| \le \sum_{n'=n_1}^{n_2-1}\left|W^\infty_{\tau_{n'+1}(t)} - W^\infty_{\tau_{n'}(t)}\right| \le \sum_{n'=n_1}^{n_2-1}\frac{\Xi^\circ_\gamma(\omega)}{2^{(n'+1)\gamma}}. \]
Thus
\[ \lim_{n_1,n_2\to\infty}\left|W^\infty_{\tau_{n_1}(t)} - W^\infty_{\tau_{n_2}(t)}\right| = 0, \]
and since $\mathbb{R}$ is complete, $\lim_{n\to\infty}W^\infty_{\tau_n(t)}$ is well-defined for each $\omega\in\Omega^\circ$. Let's then define
\[ W_t \overset{\mathrm{def}}{=} \begin{cases}\lim_{n\to\infty}W^\infty_{\tau_n(t)}(\omega) & \text{if }\omega\in\Omega^\circ\\ 0 & \text{if }\omega\notin\Omega^\circ.\end{cases} \]
We now claim that $W$ is continuous. $W$ is identically zero off $\Omega^\circ$, so we really only need consider $\omega\in\Omega^\circ$. Fix $s$ and $t$ in $[0,T]$ and assume that $t-s>0$. Let $N\in\mathbb{N}$ be such that
\[ \frac{1}{2^{N+1}} \le t-s < \frac{1}{2^N}. \]
Assume that $\tau_N(s)=k/2^N$; i.e.,
\[ \frac{k}{2^N} \le s < \frac{k+1}{2^N}. \]
If $t < (k+1)/2^N$, then
\[ \frac{k}{2^N} \le s \le t < \frac{k+1}{2^N} \]
and $\tau_N(t)=k/2^N$. In this case
\[ W^\infty_{\tau_N(t)} - W^\infty_{\tau_N(s)} = W^\infty_{k/2^N} - W^\infty_{k/2^N} = 0. \]
If $t\ge(k+1)/2^N$, then
\[ \frac{k+1}{2^N} \le t \le s+\frac{1}{2^N} \le \frac{k+2}{2^N} \]
and $\tau_N(t) = (k+1)/2^N$; we here get that
\[ \left|W^\infty_{\tau_N(t)} - W^\infty_{\tau_N(s)}\right| = \left|W^\infty_{(k+1)/2^N} - W^\infty_{k/2^N}\right| \le \frac{\Xi^\circ_\gamma(\omega)}{2^{N\gamma}}. \]
Thus for any $n\ge N$, we have (using (5)) that
\begin{align*} \left|W^\infty_{\tau_n(t)} - W^\infty_{\tau_n(s)}\right| &\le \left|W^\infty_{\tau_n(t)} - W^\infty_{\tau_N(t)}\right| + \left|W^\infty_{\tau_N(t)} - W^\infty_{\tau_N(s)}\right| + \left|W^\infty_{\tau_n(s)} - W^\infty_{\tau_N(s)}\right| \\
&\le 2\sum_{n'=N}^{n}\frac{\Xi^\circ_\gamma(\omega)}{2^{\gamma n'}} + \frac{\Xi^\circ_\gamma(\omega)}{2^{N\gamma}} \le 2\Xi^\circ_\gamma(\omega)\sum_{n'=N}^\infty\left(\frac{1}{2^\gamma}\right)^{n'} + \Xi^\circ_\gamma(\omega)\left(\frac{1}{2^N}\right)^\gamma \\
&\le \Xi^\circ_\gamma(\omega)\left\{\frac{2}{1-2^{-\gamma}}+1\right\}\left(\frac{1}{2^N}\right)^\gamma = 2^\gamma\,\Xi^\circ_\gamma(\omega)\left\{\frac{2}{1-2^{-\gamma}}+1\right\}\left(\frac{1}{2^{N+1}}\right)^\gamma. \end{align*}
Letting $n\nearrow\infty$ and using $2^{-(N+1)}\le t-s$, we get that
\[ |W_t-W_s| \le 2^\gamma\,\Xi^\circ_\gamma(\omega)\left\{\frac{2}{1-2^{-\gamma}}+1\right\}|t-s|^\gamma. \]
Finally, note that
\[ \mathbb{E}[|W_t-W^\infty_t|\wedge1] = \lim_{n\to\infty}\mathbb{E}[|W^\infty_{\tau_n(t)}-W^\infty_t|\wedge1] \le \lim_{n\to\infty}\mathbb{E}[|W^\infty_{\tau_n(t)}-W^\infty_t|] \le \lim_{n\to\infty}\sqrt{\mathbb{E}[|W^\infty_{\tau_n(t)}-W^\infty_t|^2]} = \lim_{n\to\infty}\sqrt{t-\tau_n(t)} = 0. \qquad\square \]
This calculation is essentially due to Kolmogorov.
Exercises

(1) Prove (1).
(2) In the notation of (3), prove that
\[ \mathbb{E}\left[\exp\left[\sqrt{-1}\sum_{n=0}^N\xi_n\langle f,\varphi_n\rangle_{L^2}\right]\right] = \exp\left[-\frac12\sum_{n=0}^N\langle f,\varphi_n\rangle^2_{L^2}\right]. \]
(3) Note that $W^N$ is differentiable. Define
\[ W^N(\theta) \overset{\mathrm{def}}{=} \int_{t=0}^T\dot W^N(t)\exp\left[\sqrt{-1}\theta t\right]dt \]
for all $\theta\in\mathbb{R}$ and $N\in\mathbb{N}$. Compute
\[ \lim_{N\to\infty}\mathbb{E}\left[|W^N(\theta)|^2\right] \]
for all $\theta\in\mathbb{R}$.
CHAPTER 2

Martingales

Let $W$ be a standard Brownian motion. Define
\[ (6)\qquad \mathcal{W}_t \overset{\mathrm{def}}{=} \bigcap_{s>t}\sigma\{W_r;\,r\le s\}. \]
Then $\{\mathcal{W}_t\}_{t\ge0}$ is a right-continuous filtration, and $W$ is adapted to $\{\mathcal{W}_t\}_{t\ge0}$. Fix $0\le s<t$ and $A\in\mathcal{W}_s$. For any $s'\in(s,t)$, $A\in\sigma\{W_r;\,r\le s'\}$, so
\[ \mathbb{E}[W_t\chi_A] = \mathbb{E}[(W_t-W_{s'})\chi_A] + \mathbb{E}[W_{s'}\chi_A] = \mathbb{E}[W_{s'}\chi_A], \]
since $W_t-W_{s'}$ is independent of $A$. Since $\lim_{s'\searrow s}\mathbb{E}[|W_{s'}-W_s|]=0$,
\[ \mathbb{E}[W_t\chi_A] = \mathbb{E}[W_s\chi_A], \]
so $\mathbb{E}[W_t|\mathcal{W}_s]=W_s$. Thus $W$ is a continuous square-integrable martingale with respect to a right-continuous filtration.

Now that we have one continuous square-integrable martingale with a right-continuous filtration, let's generalize. Let $M$ be a square-integrable continuous process, let $\{\mathcal{F}_t\}_{t\ge0}$ be a right-continuous filtration, and suppose that $M$ is a martingale with respect to $\{\mathcal{F}_t\}_{t\ge0}$.

Let's next understand quadratic variation. Fix $0\le s<t$ and $s'\in(s,t]$. For any $A\in\sigma\{W_r;\,r\le s'\}$,
\begin{align*} \mathbb{E}[(W_t^2-t)\chi_A] &= \mathbb{E}\left[\left\{(W_t-W_{s'}+W_{s'})^2-t\right\}\chi_A\right] = \mathbb{E}\left[\left\{(W_t-W_{s'})^2+2(W_t-W_{s'})W_{s'}+W_{s'}^2-t\right\}\chi_A\right] \\
&= \mathbb{E}\left[\left\{t-s'+W_{s'}^2-t\right\}\chi_A\right] = \mathbb{E}[(W_{s'}^2-s')\chi_A], \end{align*}
where we have again used the fact that $W_t-W_{s'}$ is independent of $A$. Since $\lim_{s'\searrow s}\mathbb{E}[|W_{s'}-W_s|^2]=0$,
\[ \mathbb{E}[(W_t^2-t)\chi_A] = \mathbb{E}[(W_s^2-s)\chi_A], \]
so in fact $\mathbb{E}[W_t^2-t|\mathcal{W}_s] = W_s^2-s$; i.e., $W_t^2-t$ is a martingale. We also note that the process $\{t;\,t\ge0\}$ is continuous, adapted, and non-decreasing, and that it vanishes at $0$.

Definition 0.4. Let $M$ be a square-integrable martingale with respect to a filtration $\{\mathcal{F}_t\}_{t\ge0}$. A continuous and adapted non-decreasing process $\langle M\rangle$ for which $\langle M\rangle_0=0$ is called the quadratic variation of $M$ if $\{M_t^2-\langle M\rangle_t;\,t\ge0\}$ is a martingale (with respect to $\{\mathcal{F}_t\}_{t\ge0}$).

In fact, we could be more general (and require slightly less than continuity of $\langle M\rangle$), but at the cost of a number of technicalities. See [?]. It also turns out that $\langle M\rangle$ must be unique. Note that if $M_0=0$, then
\[ (7)\qquad \mathbb{E}[M_t^2] = \mathbb{E}[M_t^2-\langle M\rangle_t] + \mathbb{E}[\langle M\rangle_t] = \mathbb{E}[M_0^2-\langle M\rangle_0] + \mathbb{E}[\langle M\rangle_t] = \mathbb{E}[\langle M\rangle_t] \]
(which, in the case of Brownian motion, recovers the fact that $\mathbb{E}[W_t^2]=t$).

Assume now that $M$ is a continuous square-integrable martingale with respect to a right-continuous filtration $\{\mathcal{F}_t\}_{t\ge0}$. A number of calculations work only if we assume that $M$ is bounded. Let's understand how we can localize to achieve this. Fix $L>0$ and define
\[ \tau \overset{\mathrm{def}}{=} \inf\{t\ge0:\ |M_t|>L\text{ or }\langle M\rangle_t>L\}. \]
Then $\tau$ is a stopping time. Define $\tilde M_t \overset{\mathrm{def}}{=} M_{\tau\wedge t}$ for all $t\ge0$. We claim that $\tilde M$ is also a continuous martingale with respect to $\{\mathcal{F}_t\}_{t\ge0}$ (and of course it is bounded). We also claim that $\langle\tilde M\rangle_t = \langle M\rangle_{\tau\wedge t}$. Clearly $\tilde M$ is square-integrable. Secondly, define $\tau_N \overset{\mathrm{def}}{=} \lceil\tau N\rceil/N$. Then the $\tau_N$'s are stopping times and $\tau_N\searrow\tau$.
Assume for a moment that $X$ is a continuous process which is adapted to $\{\mathcal{F}_t\}_{t\ge0}$. For any $t\ge0$ and $A\in\mathcal{B}(\mathbb{R})$,
\[ \{X_{\tau_N\wedge t}\in A\} = (\{\tau_N>t\}\cap\{X_t\in A\}) \cup (\{\tau_N\le t\}\cap\{X_{\tau_N}\in A\}). \]
Since $\tau_N$ is a stopping time and $X$ is adapted, $\{\tau_N>t\}$ and $\{X_t\in A\}$ are both in $\mathcal{F}_t$. Secondly,
\[ \{\tau_N\le t\}\cap\{X_{\tau_N}\in A\} = \bigcup_{\substack{n\in\mathbb{N}\\ n/N\le t}}\{\tau_N=n/N\}\cap\{X_{n/N}\in A\}. \]
If $n/N\le t$, then $\{\tau_N=n/N\}$ and $\{X_{n/N}\in A\}$ are both in $\mathcal{F}_{n/N}\subset\mathcal{F}_t$. Collecting things together, we get that $X_{\tau_N\wedge t}$ is $\mathcal{F}_t$-measurable. Since $X_{\tau\wedge t} = \lim_{N\to\infty}X_{\tau_N\wedge t}$, $X_{\tau\wedge t}$ is the limit of $\mathcal{F}_t$-measurable random variables; thus $X_{\tau\wedge t}$ is also $\mathcal{F}_t$-measurable. Hence both $\{M_{\tau_N\wedge t};\,t\ge0\}$ and $\{\langle M\rangle_{\tau_N\wedge t};\,t\ge0\}$ are adapted to $\{\mathcal{F}_t\}_{t\ge0}$. Let's now moreover assume that $X$ is a square-integrable martingale with respect to $\{\mathcal{F}_t\}_{t\ge0}$. Fix $0\le s\le t$ and $A\in\mathcal{F}_s$. Then
\begin{align*} (8)\qquad \mathbb{E}[X_{\tau_N\wedge t}\chi_A] &= \mathbb{E}\left[X_{\tau_N\wedge t}\chi_{A\cap\{\tau_N>s\}}\right] + \mathbb{E}\left[X_{\tau_N\wedge t}\chi_{A\cap\{\tau_N\le s\}}\right] \\
&= \mathbb{E}\left[X_{(\tau_N\wedge t)\vee s}\chi_{A\cap\{\tau_N>s\}}\right] + \mathbb{E}\left[X_{\tau_N}\chi_{A\cap\{\tau_N\le s\}}\right] \\
&= \mathbb{E}\left[X_s\chi_{A\cap\{\tau_N>s\}}\right] + \mathbb{E}\left[X_{\tau_N}\chi_{A\cap\{\tau_N\le s\}}\right] = \mathbb{E}[X_{\tau_N\wedge s}\chi_A]. \end{align*}
Note that $(\tau_N\wedge t)\vee s$ and $s$ are both finite-valued stopping times and $(\tau_N\wedge t)\vee s\ge s$; we have used optional sampling in (8). We now let $N\to\infty$. From Doob's inequality, we have that
\[ \mathbb{E}\left[\sup_{0\le s\le t}X_s^2\right] \le 4\,\mathbb{E}[X_t^2] < \infty, \]
so by dominated convergence we may pass to the limit in (8); thus $\mathbb{E}[X_{\tau\wedge t}\chi_A] = \mathbb{E}[X_{\tau\wedge s}\chi_A]$, and $X_{\tau\wedge\cdot}$ is again a square-integrable martingale. Applying this with $X=M$ and with $X=M^2-\langle M\rangle$ gives the claims about $\tilde M$.

Let's next see that $\langle M\rangle$ is the limit of sums of squared increments of $M$. Fix $t>0$ and a positive integer $N$, and define
\[ (11)\qquad s^{(N)}_n \overset{\mathrm{def}}{=} \frac{tn}{N}, \qquad n\in\{0,1,\dots,N\}, \]
\[ (12)\qquad V^{(N)} \overset{\mathrm{def}}{=} \sum_{n=0}^{N-1}\left(M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right)^2, \]
and
\[ (15)\qquad D^{(N)}_n \overset{\mathrm{def}}{=} \left(M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right)^2 - \left(\langle M\rangle_{s^{(N)}_{n+1}}-\langle M\rangle_{s^{(N)}_n}\right), \]
so that $V^{(N)} - (\langle M\rangle_t-\langle M\rangle_0) = \sum_{n=0}^{N-1}D^{(N)}_n$. Since $M^2-\langle M\rangle$ is a martingale,
\[ (9)\qquad \mathbb{E}\left[D^{(N)}_n\,\middle|\,\mathcal{F}_{s^{(N)}_n}\right] = 0. \]
We want to show that
\[ (14)\qquad \lim_{N\to\infty}\mathbb{E}\left[\left|V^{(N)}-(\langle M\rangle_t-\langle M\rangle_0)\right|^2\right] = 0. \]
By localization, we may assume that there is $K>0$ such that
\[ \sup_{0\le s\le t}|M_s| \le K \quad\text{and}\quad \sup_{0\le s\le t}\langle M\rangle_s \le K. \]
We first claim that
\[ (13)\qquad \mathbb{E}\left[(V^{(N)})^2\right] \le 4K^4 + 8K^3. \]
Indeed, this follows by expanding the square of $V^{(N)}$ into diagonal and cross terms and using (9) to see that $\mathbb{E}[D^{(N)}_n|\mathcal{F}_{s^{(N)}_n}] = 0$. Note next that
\begin{align*} |D^{(N)}_n|^2 &= \left\{\left(M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right)^2 - \left(\langle M\rangle_{s^{(N)}_{n+1}}-\langle M\rangle_{s^{(N)}_n}\right)\right\}^2 \\
&\le 2\left(M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right)^4 + 2\left(\langle M\rangle_{s^{(N)}_{n+1}}-\langle M\rangle_{s^{(N)}_n}\right)^2 \\
&\le 2|\xi^a_N|^2\left(M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right)^2 + 2|\xi^b_N|\left\{\langle M\rangle_{s^{(N)}_{n+1}}-\langle M\rangle_{s^{(N)}_n}\right\}, \end{align*}
where
\[ (16)\qquad \xi^a_N \overset{\mathrm{def}}{=} \max_{0\le n\le N-1}\left|M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right|, \qquad \xi^b_N \overset{\mathrm{def}}{=} \max_{0\le n\le N-1}\left|\langle M\rangle_{s^{(N)}_{n+1}}-\langle M\rangle_{s^{(N)}_n}\right|. \]
Note that $|\xi^a_N|\le 2K$ and $|\xi^b_N|\le 2K$, and that by continuity, $\lim_{N\to\infty}\xi^a_N = 0$ and $\lim_{N\to\infty}\xi^b_N = 0$ $\mathbb{P}$-a.s. Thus
\[ \mathbb{E}\left[\sum_{n=0}^{N-1}|D^{(N)}_n|^2\right] \le 2\,\mathbb{E}\left[|\xi^a_N|^2V^{(N)}\right] + 2\,\mathbb{E}\left[|\xi^b_N|\left\{\langle M\rangle_t-\langle M\rangle_0\right\}\right] \le 2\left\{\mathbb{E}\left[|\xi^a_N|^4\right]\mathbb{E}\left[(V^{(N)})^2\right]\right\}^{1/2} + 4K\,\mathbb{E}[|\xi^b_N|]. \]
By (13) and bounded convergence, both terms on the right vanish; thus
\[ (17)\qquad \lim_{N\to\infty}\mathbb{E}\left[\sum_{n=0}^{N-1}|D^{(N)}_n|^2\right] = 0. \]
Since the cross terms $\mathbb{E}[D^{(N)}_nD^{(N)}_{n'}]$ vanish for $n<n'$ by (9), we have $\mathbb{E}[|V^{(N)}-(\langle M\rangle_t-\langle M\rangle_0)|^2] = \sum_{n=0}^{N-1}\mathbb{E}[|D^{(N)}_n|^2]$; using this and combining things together, we indeed get (14).
Exercises

(1) Assume that $W$ is a Brownian motion.
(a) Show that $\lim_{s\to t}\mathbb{E}[|W_s-W_t|]=0$ for all $t\ge0$.
(b) Show that $\lim_{s\to t}\mathbb{E}[W_s\chi_A] = \mathbb{E}[W_t\chi_A]$ for all $t\ge0$ and $A\in\mathcal{F}$.
(2) One reason we are interested in right-continuous filtrations is as follows. Suppose that $X$ is a continuous stochastic process which is adapted to a right-continuous filtration $\{\mathcal{F}_t\}_{t\ge0}$ and which takes values in a topological space $S$. Fixing an open subset $O$ of $S$, define $\tau \overset{\mathrm{def}}{=} \inf\{t\ge0:\ X_t\in O\}$. Show that
\[ \{\tau\le t\} = \bigcap_{\substack{s>t\\ s\in\mathbb{Q}}}\bigcup_{\substack{r<s\\ r\in\mathbb{Q}}}\{X_r\in O\}, \]
and conclude that $\tau$ is a stopping time.
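The quadratic-variation limit (14) is easy to see in simulation when $M=W$ is a Brownian motion, for which $\langle W\rangle_t = t$. The following sketch (assuming NumPy; grid sizes are illustrative) builds one path on a fine grid and sums squared increments over coarser partitions:

```python
import numpy as np

def quadratic_variation(t=1.0, n_fine=2**18, seed=0):
    """Simulate one Brownian path on a fine grid; return V^(N) = sum of squared
    increments over coarser partitions with N intervals (should approach t)."""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal(n_fine) * np.sqrt(t / n_fine)
    W = np.concatenate([[0.0], np.cumsum(dW)])
    out = {}
    for N in (2**8, 2**12, 2**16):
        idx = np.linspace(0, n_fine, N + 1).astype(int)  # exact multiples
        incr = np.diff(W[idx])
        out[N] = float(np.sum(incr**2))
    return out
```

Since $\mathbb{E}[V^{(N)}]=t$ with variance of order $1/N$, the sums concentrate around $t=1$ as $N$ grows.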
CHAPTER 3

Stochastic Integration

We want to understand a notion of stochastic integration which focuses on martingale transforms, by which we can make new martingales out of old ones.

Let's first define a functional space. If $\xi$ is a continuous process on an interval containing $[0,T]$, define
\[ \|\xi\|_{H(T)} \overset{\mathrm{def}}{=} \sqrt{\mathbb{E}\left[\sup_{0\le s\le T}|\xi_s|^2\right]}. \]
Let $H(T)$ be the collection of continuous and adapted (with respect to $\{\mathcal{F}_t\}_{t\ge0}$) processes on $[0,T]$ for which $\|\xi\|_{H(T)}<\infty$.
Remark 0.5. What we are about to do is a bit subtle. Let's consider a toy problem which will provide some exemplary geometry.

The set $\mathbb{Q}\times\{0\}$ is a subset of $\mathbb{R}^2$ (obviously). For $(q,0)\in\mathbb{Q}\times\{0\}$, define
\[ T(q,0) \overset{\mathrm{def}}{=} (2q,3q,q)\in\mathbb{R}^3. \]
Thus $T:\mathbb{Q}\times\{0\}\to\mathbb{R}^3$. Of course $T$ is linear. It is also continuous;
\[ (20)\qquad \|T(q_1,0)-T(q_2,0)\|_{\mathbb{R}^3} \le \sqrt{14}\,\|(q_1,0)-(q_2,0)\|_{\mathbb{R}^2} \]
for all $(q_1,0)$ and $(q_2,0)$ in $\mathbb{Q}\times\{0\}$ (we are using the standard norms on $\mathbb{R}^2$ and $\mathbb{R}^3$ here).

We should be able to "extend" $T$; we should be able to (naturally and uniquely) extend $T$ to all of $\mathbb{R}\times\{0\}$ (which is the closure of $\mathbb{Q}\times\{0\}$); i.e., the map
\[ T^e(x,0) \overset{\mathrm{def}}{=} (2x,3x,x), \qquad (x,0)\in\mathbb{R}\times\{0\}, \]
is an extension of $T$.

The procedure is as follows. Fix $(x,0)\in\mathbb{R}\times\{0\}$. Let $\{(q_n,0)\}_{n\in\mathbb{N}}$ be a sequence of points in $\mathbb{Q}\times\{0\}$ which converges to $(x,0)$. Then
\[ \lim_{n,n'\to\infty}\|(q_n,0)-(q_{n'},0)\|_{\mathbb{R}^2} \le \lim_{n,n'\to\infty}\left\{\|(q_n,0)-(x,0)\|_{\mathbb{R}^2} + \|(q_{n'},0)-(x,0)\|_{\mathbb{R}^2}\right\} = 0. \]
Thus by (20),
\[ \lim_{n,n'\to\infty}\|T(q_n,0)-T(q_{n'},0)\|_{\mathbb{R}^3} \le \sqrt{14}\lim_{n,n'\to\infty}\|(q_n,0)-(q_{n'},0)\|_{\mathbb{R}^2} = 0. \]
Since $\mathbb{R}^3$ is complete, there is a point (which we shall label as $T^e(x,0)$) in $\mathbb{R}^3$ such that
\[ \lim_{n\to\infty}\|T(q_n,0)-T^e(x,0)\|_{\mathbb{R}^3} = 0. \]
In fact (see the problems) $T^e$ is unique and linear and $T^e = T$ on $\mathbb{Q}\times\{0\}$. Note that we simply have no idea how to define $T^e(2,5)$; i.e., how to extend $T$ off of $\mathbb{R}\times\{0\}$.

The general geometry is that we have a subset $\mathbb{X}'$ of a Banach space $\mathbb{X}$. We have a continuous map $T$ from $\mathbb{X}'$ into another Banach space $\mathbb{Y}$. We can then uniquely extend $T$ to $\overline{\mathbb{X}'}$, the closure of $\mathbb{X}'$ in $\mathbb{X}$.
The Itô isometry allows us to define the Itô integral. Fix $T>0$. For a measurable stochastic process $f$, define
\[ \|f\|_{\mathbb{X}} \overset{\mathrm{def}}{=} \sqrt{\mathbb{E}\left[\int_{s=0}^T f_s^2\,d\langle M\rangle_s\right]}. \]
Let $\mathbb{X}$ be the collection of measurable processes for which $\|f\|_{\mathbb{X}}<\infty$; then $\mathbb{X}$ is a Banach space with norm $\|\cdot\|_{\mathbb{X}}$. Let $\mathbb{X}'$ be the collection of simple predictable processes; clearly $\mathbb{X}'\subset\mathbb{X}$. Define a new stochastic process by stochastic integration; define
\[ T_t(f) \overset{\mathrm{def}}{=} \int_{s=0}^t f_s\,dM_s. \]
Then $T:\mathbb{X}'\to H(T)$, and by (19) we in fact have that
\[ \|T(f)\|_{H(T)} \le 2\|f\|_{\mathbb{X}}. \]
Define $\mathcal{P} \overset{\mathrm{def}}{=} \overline{\mathbb{X}'}$ (the closure in $\mathbb{X}$); we refer to $\mathcal{P}$ as the collection of predictable processes; we can then uniquely extend $T$ to a linear map on $\mathcal{P}$. In other words, if $f\in\mathcal{P}$, then there is a collection $\{f^{(n)}\}_{n\in\mathbb{N}}$ of simple predictable processes such that
\[ \lim_{n\to\infty}\mathbb{E}\left[\int_{s=0}^T|f^{(n)}_s-f_s|^2\,d\langle M\rangle_s\right] = 0, \]
and there is a $T(f)\in H(T)$ such that
\[ \lim_{n\to\infty}\mathbb{E}\left[\sup_{0\le t\le T}\left|T_t(f)-\int_{s=0}^t f^{(n)}_s\,dM_s\right|^2\right] = 0. \]
For $0\le s\le t\le T$ and $A\in\mathcal{F}_s$,
\[ \mathbb{E}[T_t(f)\chi_A] = \lim_{n\to\infty}\mathbb{E}\left[\int_{r=0}^t f^{(n)}_r\,dM_r\,\chi_A\right] = \lim_{n\to\infty}\mathbb{E}\left[\int_{r=0}^s f^{(n)}_r\,dM_r\,\chi_A\right] = \mathbb{E}[T_s(f)\chi_A] \]
and
\begin{align*} \mathbb{E}\left[\left\{(T_t(f))^2-\int_{r=0}^t f_r^2\,d\langle M\rangle_r\right\}\chi_A\right] &= \lim_{n\to\infty}\mathbb{E}\left[\left\{\left(\int_{r=0}^t f^{(n)}_r\,dM_r\right)^2-\int_{r=0}^t(f^{(n)}_r)^2\,d\langle M\rangle_r\right\}\chi_A\right] \\
&= \lim_{n\to\infty}\mathbb{E}\left[\left\{\left(\int_{r=0}^s f^{(n)}_r\,dM_r\right)^2-\int_{r=0}^s(f^{(n)}_r)^2\,d\langle M\rangle_r\right\}\chi_A\right] \\
&= \mathbb{E}\left[\left\{(T_s(f))^2-\int_{r=0}^s f_r^2\,d\langle M\rangle_r\right\}\chi_A\right]. \end{align*}
Thus $T(f)$ is a square-integrable continuous martingale with quadratic variation process $\int_{s=0}^t f_s^2\,d\langle M\rangle_s$.
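These martingale transforms are easy to probe numerically. A minimal sketch (with $M=W$ a Brownian motion and illustrative helper names) approximates $\int_0^T W_s\,dW_s$ by the left-endpoint (predictable) sums $\sum_n W_{s_n}(W_{s_{n+1}}-W_{s_n})$, and checks the two facts just proved: the integral has mean zero, and its second moment equals $\mathbb{E}[\int_0^T W_s^2\,ds] = T^2/2$.

```python
import numpy as np

def ito_integral_W_dW(T=1.0, n_steps=1000, n_paths=20000, seed=1):
    """Left-endpoint approximation of int_0^T W dW over many paths."""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(T / n_steps)
    W = np.cumsum(dW, axis=1)
    W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # W at left endpoints
    return np.sum(W_left * dW, axis=1)

I = ito_integral_W_dW()
# Martingale property: mean(I) ~ 0.  Ito isometry: mean(I^2) ~ int_0^1 s ds = 1/2.
```

The left-endpoint choice matters: it is what makes the approximating integrand simple and predictable.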
Exercises

(1) Fix $0\le t_0<t_1$. As usual,
\[ \chi_{(t_0,t_1]}(s) = \begin{cases}1 & \text{if }s\in(t_0,t_1]\\ 0 & \text{else.}\end{cases} \]
Show that
\[ \int_{s=0}^t\chi_{(t_0,t_1]}(s)\,ds = (t_1\wedge t)-(t_0\wedge t). \]
(2) Fix $0\le t_1<t_2$. Let $M$ be an integrable $\{\mathcal{F}_t\}_{t\ge0}$-adapted continuous stochastic process. Suppose that
\[ (21)\qquad \mathbb{E}[M_t|\mathcal{F}_s] = M_s \]
if either
• $t_1\le s<t\le t_2$
• $0\le s<t\le t_1$.
Show that (21) then holds for all $0\le s\le t\le t_2$.
(3) Fix $0\le t_1\le t_2$ and a bounded $\mathcal{F}_{t_1}$-measurable random variable $\xi$. Define
\[ \tilde M_t \overset{\mathrm{def}}{=} \xi(M_{t_2\wedge t}-M_{t_1\wedge t}). \]
(a) Show that $\tilde M$ is a martingale with respect to $\{\mathcal{F}_t\}_{t\ge0}$ (remember to show that it is adapted).
(b) Show that
\[ \langle\tilde M\rangle_t = \xi^2\left(\langle M\rangle_{t_2\wedge t}-\langle M\rangle_{t_1\wedge t}\right). \]
(4) Fix $0\le t_1<t_2\le t_3<t_4$ and two bounded random variables $\xi_A$ and $\xi_B$, where $\xi_A$ is $\mathcal{F}_{t_1}$-measurable and $\xi_B$ is $\mathcal{F}_{t_3}$-measurable. Define
\[ \tilde M^A_t \overset{\mathrm{def}}{=} \xi_A(M_{t_2\wedge t}-M_{t_1\wedge t}) \quad\text{and}\quad \tilde M^B_t \overset{\mathrm{def}}{=} \xi_B(M_{t_4\wedge t}-M_{t_3\wedge t}). \]
Show that $\tilde M^A\tilde M^B$ (i.e., the product of the two processes) is a martingale.
(5) Back to (6). Defining $\mathcal{W}_{t+} \overset{\mathrm{def}}{=} \bigcap_{s>t}\mathcal{W}_s$ for all $t\ge0$, show that $\mathcal{W}_{t+}=\mathcal{W}_t$ for all $t\ge0$; i.e., $\{\mathcal{W}_t\}_{t\ge0}$ is right-continuous.
(6) Show that if $f$ is adapted, bounded, and continuous, it is predictable.
(7) Suppose that $\tau$ is a stopping time. Again define $\tau_N \overset{\mathrm{def}}{=} \lceil\tau N\rceil/N$.
(a) For each $k\in\mathbb{N}$ and $T>0$, show that $\chi_{\{\tau_N=k/N\}}\chi_{(k/N,T]}$ is a simple predictable function.
(b) Show that $\chi_{(\tau_N,T]}$ is a simple predictable function.
(c) Show that $\chi_{[0,\tau_N\wedge T]}$ is a simple predictable function.
(d) Show that $\chi_{[0,\tau\wedge T]}$ is a predictable function.
(e) Show that $\chi_{[0,\tau]}$ is a predictable function.
(8) In Remark 0.5,
(a) Show that $T^e(x,0)$ is uniquely defined; if there are two sequences $\{(q_n,0)\}_{n\in\mathbb{N}}$ and $\{(q'_n,0)\}_{n\in\mathbb{N}}$ converging to $(x,0)$ in $\mathbb{R}^2$, then $\lim_{n\to\infty}T(q_n,0) = \lim_{n\to\infty}T(q'_n,0)$.
(b) Show that $T^e$ is linear.
(c) Show that $T^e = T$ on $\mathbb{Q}\times\{0\}$.
(9) For each $T>0$, show that $H(T)$ is indeed a Banach space.
CHAPTER 4

SDEs

Let's consider the SDE
\[ dX_t = b(X_t)dt + \sigma(X_t)dW_t, \qquad X_0 = x^\circ, \]
where $W$ is a standard Brownian motion. This is short for the integral equation
\[ X_t = x^\circ + \int_{s=0}^t b(X_s)ds + \int_{s=0}^t\sigma(X_s)dW_s. \]
We will assume that $b$ and $\sigma$ are Lipschitz with common Lipschitz constant $K_L$. We will use a Picard iteration. Fix $T>0$. Define
\[ X^{(0)}_t \overset{\mathrm{def}}{=} x^\circ, \qquad X^{(n+1)}_t \overset{\mathrm{def}}{=} x^\circ + \int_{s=0}^t b(X^{(n)}_s)ds + \int_{s=0}^t\sigma(X^{(n)}_s)dW_s \]
for $0\le t\le T$. Let's formalize this as a recursion on $H(T)$, where $H(T)$ was introduced in Chapter 3. For $Z\in H(T)$, let's define
\[ (22)\qquad T^A_t(Z) \overset{\mathrm{def}}{=} \int_{s=0}^t b(Z_s)ds, \qquad T^B_t(Z) \overset{\mathrm{def}}{=} \int_{s=0}^t\sigma(Z_s)dW_s, \qquad T_t(Z) \overset{\mathrm{def}}{=} x^\circ + T^A_t(Z) + T^B_t(Z) \]
for all $t\in[0,T]$. We have defined $X^{(n+1)} = T(X^{(n)})$, and we want the limit of this recursion to solve
\[ X = T(X). \]
Let's first use the Lipschitz requirement to see that $T^A$ and $T^B$ map $H(T)$ into itself. Since $b$ and $\sigma$ are Lipschitz,
\[ |b(x)| \le |b(0)|+|b(x)-b(0)| \le |b(0)|+K_L|x| \le K(1+|x|), \qquad |\sigma(x)| \le |\sigma(0)|+|\sigma(x)-\sigma(0)| \le |\sigma(0)|+K_L|x| \le K(1+|x|) \]
for all $x\in\mathbb{R}$, where $K \overset{\mathrm{def}}{=} \max\{|b(0)|,|\sigma(0)|,K_L\}$. Fix $Z\in H(T)$. For $0\le s\le t\le T$,
\begin{align*} (T^A_s(Z))^2 &\le \left|\int_{r=0}^s b(Z_r)dr\right|^2 \le \left|\int_{r=0}^s|b(Z_r)|dr\right|^2 \le \left|\int_{r=0}^t|b(Z_r)|dr\right|^2 \\
&\le K^2\left|\int_{r=0}^t(1+|Z_r|)dr\right|^2 \le K^2t^2\left\{1+\sup_{0\le r\le T}|Z_r|\right\}^2 \le 2K^2t^2\left\{1+\sup_{0\le r\le T}|Z_r|^2\right\}, \end{align*}
so
\[ \|T^A(Z)\|^2_{H(T)} \le 2K^2T^2\left(1+\|Z\|^2_{H(T)}\right). \]
We next use Doob's inequality and the Itô isometry to see that
\[ \|T^B(Z)\|^2_{H(T)} \le 2\,\mathbb{E}\left[\int_{s=0}^T\sigma^2(Z_s)ds\right] \le 2K^2\,\mathbb{E}\left[\int_{s=0}^T(1+|Z_s|)^2ds\right] \le 4K^2\,\mathbb{E}\left[\int_{s=0}^T\left\{1+|Z_s|^2\right\}ds\right] \le 4K^2T\left\{1+\|Z\|^2_{H(T)}\right\}. \]
Let's next show that the $X^{(n)}$'s converge. Fix $Z^{(1)}$ and $Z^{(2)}$ in $H(T)$. Fix $0\le s\le t\le T$. Then
\begin{align*} |T^A_s(Z^{(1)})-T^A_s(Z^{(2)})|^2 &= \left|\int_{r=0}^s\left\{b(Z^{(1)}_r)-b(Z^{(2)}_r)\right\}dr\right|^2 \le t\int_{r=0}^s\left|b(Z^{(1)}_r)-b(Z^{(2)}_r)\right|^2dr \\
&\le t\int_{r=0}^t\left|b(Z^{(1)}_r)-b(Z^{(2)}_r)\right|^2dr \le K_L^2t\int_{r=0}^t\left|Z^{(1)}_r-Z^{(2)}_r\right|^2dr \le K_L^2T\int_{r=0}^t\sup_{0\le r'\le r}\left|Z^{(1)}_{r'}-Z^{(2)}_{r'}\right|^2dr. \end{align*}
Thus
\[ \sup_{0\le s\le t}|T^A_s(Z^{(1)})-T^A_s(Z^{(2)})|^2 \le K_L^2T\int_{r=0}^t\sup_{0\le r'\le r}\left|Z^{(1)}_{r'}-Z^{(2)}_{r'}\right|^2dr, \]
so taking expectations we get that
\[ \|T^A(Z^{(1)})-T^A(Z^{(2)})\|^2_{H(t)} \le K_L^2T\int_{r=0}^t\|Z^{(1)}-Z^{(2)}\|^2_{H(r)}dr. \]
To similarly bound $T^B$, we use Doob's inequality. For $0\le t\le T$ we get that
\begin{align*} \mathbb{E}\left[\sup_{0\le s\le t}|T^B_s(Z^{(1)})-T^B_s(Z^{(2)})|^2\right] &\le 2\,\mathbb{E}\left[|T^B_t(Z^{(1)})-T^B_t(Z^{(2)})|^2\right] \le 2\,\mathbb{E}\left[\int_{r=0}^t\left\{\sigma(Z^{(1)}_r)-\sigma(Z^{(2)}_r)\right\}^2dr\right] \\
&\le 2K_L^2\int_{r=0}^t\mathbb{E}\left[\left|Z^{(1)}_r-Z^{(2)}_r\right|^2\right]dr \le 2K_L^2\int_{r=0}^t\mathbb{E}\left[\sup_{0\le r'\le r}\left|Z^{(1)}_{r'}-Z^{(2)}_{r'}\right|^2\right]dr. \end{align*}
Here we get that
\[ \|T^B(Z^{(1)})-T^B(Z^{(2)})\|^2_{H(t)} \le 2K_L^2\int_{r=0}^t\|Z^{(1)}-Z^{(2)}\|^2_{H(r)}dr. \]
Combining things together (and bounding $K_L$ by $K$), we get that
\[ (23)\qquad \|T(Z^{(1)})-T(Z^{(2)})\|^2_{H(t)} \le 2K^2(T+2)\int_{r=0}^t\|Z^{(1)}-Z^{(2)}\|^2_{H(r)}dr. \]
Iterating (23), we in particular get that
\[ \mathbb{E}\left[\left|X^{(n+1)}_t-X^{(n)}_t\right|^2\right] \le \frac{(2K^2(T+2)t)^n}{n!}\,\|X^{(1)}-X^{(0)}\|^2_{H(T)}. \]
To show convergence, we note that therefore
\[ (24)\qquad \sum_{n=1}^\infty\left\{\mathbb{E}\left[\sup_{0\le t\le T}\left|X^{(n+1)}_t-X^{(n)}_t\right|^2\right]\right\}^{1/2} = \sum_{n=1}^\infty\|X^{(n+1)}-X^{(n)}\|_{H(T)} < \infty, \]
so $\{X^{(n)}\}_{n\in\mathbb{N}}$ is Cauchy in the Banach space $H(T)$; let $X$ be its limit.
Collecting things together and using (23), we have that
\[ \mathbb{E}[|X_t-T_t(X)|^2] \le 2\,\mathbb{E}[|X_t-X^{(n+1)}_t|^2] + 2\,\mathbb{E}[|T_t(X^{(n)})-T_t(X)|^2] \le 2\|X-X^{(n+1)}\|^2_{H(T)} + 2K^2(T+2)T\|X^{(n)}-X\|^2_{H(T)}. \]
Letting $n\to\infty$, we indeed get that $X = T(X)$.

Let's finally show uniqueness. Assume that $X$ and $Y$ are two solutions. We then have that
\[ \mathbb{E}\left[|X_t-Y_t|^2\right] \le 2K^2(T+2)\int_{s=0}^t\mathbb{E}\left[|X_s-Y_s|^2\right]ds. \]
Gronwall's inequality implies that $X-Y=0$; i.e., $X=Y$.
Exercises

(1) Show that $T^A(Z)$ is adapted if $Z\in H(T)$ of (22) is adapted.
(2) Fix $\lambda\in\mathbb{R}$ and consider the SDE
\[ (25)\qquad dX_t = \lambda(X_t-\bar x)dt + \sigma dW_t, \quad t\ge0, \qquad X_0 = x^\circ. \]
(a) Show that
\[ X_t = e^{\lambda t}x^\circ + (1-e^{\lambda t})\bar x + \sigma e^{\lambda t}\int_{s=0}^t e^{-\lambda s}dW_s \]
solves (25).
(b) Set $\mu(t) \overset{\mathrm{def}}{=} \mathbb{E}[X_t]$. Show that $\dot\mu(t) = \lambda(\mu(t)-\bar x)$.
(c) Set $R(t) \overset{\mathrm{def}}{=} \mathbb{E}[(X_t-\bar x)^2]$. Show that
\[ \dot R(t) = 2\lambda R(t) + \sigma^2. \]
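The linear SDE (25) is a convenient test case for the existence theory above: an Euler–Maruyama discretization should reproduce the mean ODE of exercise (b), whose solution is $\mu(t) = \bar x + e^{\lambda t}(x^\circ-\bar x)$. A minimal sketch (all parameter values are illustrative assumptions):

```python
import numpy as np

def euler_ou(lam=-1.0, xbar=0.5, sigma=0.3, x0=2.0, T=1.0,
             n_steps=500, n_paths=5000, seed=2):
    """Euler-Maruyama for dX = lam*(X - xbar) dt + sigma dW, X_0 = x0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, x0)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        X = X + lam * (X - xbar) * dt + sigma * dW
    return X

X_T = euler_ou()
# Monte Carlo mean of X_T should be near mu(T) = xbar + e^{lam T}(x0 - xbar).
```

With $\lambda<0$ the process mean-reverts toward $\bar x$, which is visible already at $T=1$.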
CHAPTER 5

Itô's formula

Let $M$ be a martingale with quadratic variation $\langle M\rangle$. Also fix $f\in\mathcal{P}$ and define
\[ A_t \overset{\mathrm{def}}{=} \int_{s=0}^t f_s\,ds, \qquad X_t = A_t + M_t \]
for all $t\in[0,T]$. Fix $\varphi\in C^\infty_b(\mathbb{R})$ (the collection of infinitely-differentiable functions, all of whose derivatives are bounded). We want to write $\varphi(X)$ as a stochastic integral.

We need some bounds. First, let's assume that $M$ and $\langle M\rangle$ are bounded by some constant $K>0$. Secondly, let's assume that $f$ is bounded by some constant $K_f>0$. Thirdly, let's assume that $\varphi$ and its first three derivatives are bounded by some constant $K_\varphi$.

Fix $t>0$. Fix also a positive integer $N$ (which we will let become large) and define $s^{(N)}_n$ as in (11). We then have that
\[ \varphi(X_t)-\varphi(X_0) = \sum_{j=1}^6 I^{(N)}_j, \]
where
\begin{align*} I^{(N)}_1 &\overset{\mathrm{def}}{=} \sum_{n=0}^{N-1}\varphi'(X_{s^{(N)}_n})\left\{A_{s^{(N)}_{n+1}}-A_{s^{(N)}_n}\right\}, & I^{(N)}_2 &\overset{\mathrm{def}}{=} \sum_{n=0}^{N-1}\varphi'(X_{s^{(N)}_n})\left\{M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right\}, \\
I^{(N)}_3 &\overset{\mathrm{def}}{=} \frac12\sum_{n=0}^{N-1}\varphi''(X_{s^{(N)}_n})\left\{M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right\}^2, & I^{(N)}_4 &\overset{\mathrm{def}}{=} \frac12\sum_{n=0}^{N-1}\varphi''(X_{s^{(N)}_n})\left\{A_{s^{(N)}_{n+1}}-A_{s^{(N)}_n}\right\}^2, \\
I^{(N)}_5 &\overset{\mathrm{def}}{=} \sum_{n=0}^{N-1}\varphi''(X_{s^{(N)}_n})\left\{M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right\}\left\{A_{s^{(N)}_{n+1}}-A_{s^{(N)}_n}\right\}, & I^{(N)}_6 &\overset{\mathrm{def}}{=} \sum_{n=0}^{N-1}\mathcal{E}\left(X_{s^{(N)}_n};\,X_{s^{(N)}_{n+1}}-X_{s^{(N)}_n}\right), \end{align*}
where
\[ \mathcal{E}(x;y) \overset{\mathrm{def}}{=} \varphi(x+y) - \left\{\varphi(x)+\varphi'(x)y+\frac12\varphi''(x)y^2\right\} = \frac{y^3}{2}\int_{\theta=0}^1(1-\theta)^2\varphi'''(x+\theta y)d\theta. \]
Let's first rewrite some things. Defining
\[ \xi^{(N)}_s \overset{\mathrm{def}}{=} \sum_{n=0}^{N-1}\varphi'(X_{s^{(N)}_n})\chi_{(s^{(N)}_n,s^{(N)}_{n+1}]}(s), \qquad 0\le s\le t, \]
we have that $\xi^{(N)}$ is bounded and that
\[ \lim_{N\to\infty}\xi^{(N)}_s = \varphi'(X_s) \]
for all $s\in[0,t]$. Thus
\[ I^{(N)}_1 = \int_{s=0}^t\xi^{(N)}_sf_s\,ds \quad\text{and}\quad I^{(N)}_2 = \int_{s=0}^t\xi^{(N)}_s\,dM_s; \]
thus
\[ \lim_{N\to\infty}I^{(N)}_1 = \int_{s=0}^t\varphi'(X_s)f_s\,ds = \int_{s=0}^t\varphi'(X_s)\,dA_s, \qquad \lim_{N\to\infty}I^{(N)}_2 = \int_{s=0}^t\varphi'(X_s)\,dM_s; \]
the former occurs pointwise and the latter occurs in $L^2$; thus both occur in probability.

Let's next bound $I^{(N)}_4$. Note that
\[ |I^{(N)}_4| \le \frac12K_\varphi K_f^2\,t\left(\frac{t}{N}\right); \]
thus $\lim_{N\to\infty}I^{(N)}_4 = 0$ $\mathbb{P}$-a.s.

We can also bound $I^{(N)}_5$ fairly easily. We have that
\[ |I^{(N)}_5| \le K_\varphi K_f\,t\max_{0\le n\le N-1}\left|M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right|; \]
thus $\lim_{N\to\infty}I^{(N)}_5 = 0$ $\mathbb{P}$-a.s.

To bound the remaining parts, we will need some calculations from Chapter 2. Recall (12) and (16). Let's first bound $I^{(N)}_6$. We have that
\[ |I^{(N)}_6| \le \frac32K_\varphi\sum_{n=0}^{N-1}\left\{K_f^3\left(\frac{t}{N}\right)^3 + \left|M_{s^{(N)}_{n+1}}-M_{s^{(N)}_n}\right|^3\right\} \le \frac32K_\varphi\left\{K_f^3t\left(\frac{t}{N}\right)^2 + \xi^a_NV^{(N)}\right\}. \]
Thus (using (13)), we have that $\lim_{N\to\infty}\mathbb{E}[|I^{(N)}_6|] = 0$.

Let's next look at $I^{(N)}_3$. Define
\[ \bar I^{(N)}_3 \overset{\mathrm{def}}{=} \frac12\sum_{n=0}^{N-1}\varphi''(X_{s^{(N)}_n})\left\{\langle M\rangle_{s^{(N)}_{n+1}}-\langle M\rangle_{s^{(N)}_n}\right\}. \]
Defining now
\[ \bar\xi^{(N)}_s \overset{\mathrm{def}}{=} \frac12\sum_{n=0}^{N-1}\varphi''(X_{s^{(N)}_n})\chi_{(s^{(N)}_n,s^{(N)}_{n+1}]}(s), \qquad 0\le s\le t, \]
we have that
\[ \bar I^{(N)}_3 = \int_{s=0}^t\bar\xi^{(N)}_s\,d\langle M\rangle_s. \]
Since $\lim_{N\to\infty}\bar\xi^{(N)} = \frac12\varphi''(X)$, we have that
\[ \lim_{N\to\infty}\bar I^{(N)}_3 = \frac12\int_{s=0}^t\varphi''(X_s)\,d\langle M\rangle_s \]
pointwise, and thus in probability.

We want to show that $I^{(N)}_3-\bar I^{(N)}_3$ is in fact small. We will use here (15). Note that
\[ I^{(N)}_3-\bar I^{(N)}_3 = \frac12\sum_{n=0}^{N-1}\varphi''(X_{s^{(N)}_n})D^{(N)}_n. \]
Recalling (9), the cross terms vanish, and we have that
\[ \mathbb{E}\left[|I^{(N)}_3-\bar I^{(N)}_3|^2\right] = \frac14\sum_{n=0}^{N-1}\mathbb{E}\left[\left(\varphi''(X_{s^{(N)}_n})\right)^2\left(D^{(N)}_n\right)^2\right] \le \frac{K_\varphi^2}{4}\sum_{n=0}^{N-1}\mathbb{E}\left[\left(D^{(N)}_n\right)^2\right]. \]
Using (17), we indeed get that $\lim_{N\to\infty}\mathbb{E}[|I^{(N)}_3-\bar I^{(N)}_3|^2] = 0$.
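The decomposition above can be sanity-checked numerically. On a fine grid, $\varphi(W_t)-\varphi(W_0)$ should agree with the discrete sums $I^{(N)}_2+I^{(N)}_3$ (here $M=W$ and $f\equiv0$, so $I_1=I_4=I_5=0$), up to the remainder terms shown to vanish. A minimal sketch with the illustrative choice $\varphi=\cos$:

```python
import numpy as np

def ito_check(n_steps=2**16, t=1.0, seed=3):
    """Compare phi(W_t) - phi(W_0) with the discrete Ito sums for phi = cos."""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal(n_steps) * np.sqrt(t / n_steps)
    W = np.concatenate([[0.0], np.cumsum(dW)])
    left = W[:-1]                              # values at left endpoints s_n
    lhs = np.cos(W[-1]) - np.cos(0.0)
    # I_2 + I_3 with phi' = -sin and phi'' = -cos
    rhs = np.sum(-np.sin(left) * dW) + 0.5 * np.sum(-np.cos(left) * dW**2)
    return lhs, rhs
```

The gap between the two sides is exactly the Taylor remainder $I^{(N)}_6$, of size $O(N^{-1/2})$.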
Exercises

(1) Fix $t>0$. Let's again use the definition (11). Show that
\[ \lim_{N\to\infty}\sum_{n=0}^{N-1}\left(X_{s^{(N)}_{n+1}}-X_{s^{(N)}_n}\right)^2 = \langle M\rangle_t, \]
this being at least convergence in probability. Do not reprove (14).
(2) Fix $b\in\mathbb{R}$ and $\sigma>0$. Define
\[ X_t \overset{\mathrm{def}}{=} x\exp\left[(b-\sigma^2/2)t+\sigma W_t\right]. \]
Show that
\[ dX_t = bX_tdt + \sigma X_tdW_t, \qquad X_0 = x. \]
CHAPTER 6

Feynman–Kac

Let's put some of our machinery to work. Fix functions $b$ and $\sigma$ which have linear growth and which are Lipschitz-continuous. Define the partial differential operator
\[ (\mathscr{L}f)(x) \overset{\mathrm{def}}{=} \frac12\sigma^2(x)\ddot f(x) + b(x)\dot f(x), \qquad x\in\mathbb{R}, \]
for $f\in C^\infty(\mathbb{R})$. Consider the parabolic PDE
\[ (26)\qquad \frac{\partial v}{\partial t}(t,x) = (\mathscr{L}v)(t,x), \quad t>0,\ x\in\mathbb{R}; \qquad v(0,x) = v^\circ(x), \quad x\in\mathbb{R}. \]
We want to represent the solution of this via a path integral. For each $x\in\mathbb{R}$, let's solve
\[ (27)\qquad dX^x_t = b(X^x_t)dt + \sigma(X^x_t)dW_t, \qquad X^x_0 = x. \]
Assume that (26) has a solution which is smooth in time and space and such that $v$ and its partial derivatives in time and space are bounded. Fix $T>0$ and $x\in\mathbb{R}$. Define
\[ (28)\qquad Z_t \overset{\mathrm{def}}{=} v(T-t,X^x_t), \qquad 0\le t\le T. \]
By Itô's formula, $Z$ is a martingale. Also, $Z_0 = v(T,x)$, and $Z_T = v^\circ(X^x_T)$. Thus
\[ v(T,x) = \mathbb{E}[Z_0] = \mathbb{E}[Z_T] = \mathbb{E}[v^\circ(X^x_T)]. \]

Let's next impose a boundary condition. Fix $L>0$. Fix also two functions $f_L$ and $f_R$ in $C[0,\infty)$, and consider
\[ (29)\qquad \frac{\partial v}{\partial t}(t,x) = (\mathscr{L}v)(t,x), \quad t>0,\ 0<x<L; \qquad v(0,x) = v^\circ(x), \quad 0<x<L; \qquad v(t,0) = f_L(t),\quad v(t,L) = f_R(t), \quad t>0. \]
Again fix $T>0$ and $x\in(0,L)$, and again define $X^x$ as in (27). Here, though, we define the stopping time
\[ (30)\qquad \tau \overset{\mathrm{def}}{=} \inf\left\{t>0:\ X^x_t\notin(0,L)\right\}. \]
As in (28), let's define
\[ (31)\qquad Z_t \overset{\mathrm{def}}{=} v(T-t\wedge\tau,X^x_{t\wedge\tau}), \qquad 0\le t\le T. \]
Again $Z$ is a martingale. Here, however,
\[ Z_T = \begin{cases}f_L(T-\tau) & \text{if }\tau<T\text{ and }X^x_\tau = 0\\ f_R(T-\tau) & \text{if }\tau<T\text{ and }X^x_\tau = L\\ v^\circ(X^x_T) & \text{if }\tau\ge T.\end{cases} \]
Thus we get that
\[ v(T,x) = \mathbb{E}[Z_T] = \mathbb{E}\left[f_L(T-\tau)\chi_{\{\tau<T,\,X^x_\tau=0\}}\right] + \mathbb{E}\left[f_R(T-\tau)\chi_{\{\tau<T,\,X^x_\tau=L\}}\right] + \mathbb{E}\left[v^\circ(X^x_T)\chi_{\{\tau\ge T\}}\right]. \]
Note that the maximum principle follows: if $v^\circ$, $f_L$, and $f_R$ are all nonnegative, then $v$ must also be nonnegative.

Thirdly, let's consider the elliptic PDE
\[ (32)\qquad (\mathscr{L}v)(x) = f(x), \quad 0<x<L; \qquad v(0) = g_L, \qquad v(L) = g_R \]
for some function $f$ and some constants $g_L$ and $g_R$. Fix $x\in(0,L)$. We again define $\tau$ as in (30), and set
\[ Z_t \overset{\mathrm{def}}{=} v(X^x_{t\wedge\tau}). \]
Then, for $t<\tau$,
\[ dZ_t = f(X^x_t)dt + \dot v(X^x_t)\sigma(X^x_t)dW_t. \]
Consequently, if $\mathbb{P}\{\tau<\infty\}=1$, then optional sampling gives
\[ v(x) = \mathbb{E}[Z_0] = \mathbb{E}\left[g_L\chi_{\{X^x_\tau=0\}} + g_R\chi_{\{X^x_\tau=L\}}\right] - \mathbb{E}\left[\int_{s=0}^\tau f(X^x_s)ds\right]. \]

Exercises

(1) Define $s(y) \overset{\mathrm{def}}{=} \chi_{(0,\infty)}(y)-\chi_{(-\infty,0)}(y)$. For each $t>0$ and $x\in\mathbb{R}$, define
\[ u(t,x) = \int_{y\in\mathbb{R}}\frac{1}{\sqrt{2\pi t}}\exp\left[-\frac{(x-y)^2}{2t}\right]s(y)dy. \]
Fix also $T>0$ and $x^*>0$. Define $\tau \overset{\mathrm{def}}{=} \inf\{t\ge0:\ x^*+W_t<0\}$.
(a) Show that
\[ \frac{\partial u}{\partial t}(t,x) = \frac12\frac{\partial^2u}{\partial x^2}(t,x), \quad t>0,\ x\in\mathbb{R}; \qquad \lim_{t\searrow0}u(t,x) = s(x), \quad x\ne0. \]
(b) Show that
\[ u(T,x^*) = \mathbb{E}\left[u(T-t,x^*+W_t)\chi_{\{\tau>t\}}\right] \]
for each $t\in(0,T)$.
(c) Show that
\[ u(T,x^*) = \mathbb{P}\{\tau\ge T\}. \]
(d) Show that
\[ \mathbb{P}\{\tau<T\} = 2\int_{y=-\infty}^0\frac{1}{\sqrt{2\pi T}}\exp\left[-\frac{(x^*-y)^2}{2T}\right]dy = 2\,\mathbb{P}\{x^*+W_T\le0\}. \]
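The representation $v(T,x) = \mathbb{E}[v^\circ(X^x_T)]$ invites Monte Carlo: simulate (27) and average. A minimal sketch for the heat equation ($b\equiv0$, $\sigma\equiv1$, so $X^x = x+W$), with the illustrative initial condition $v^\circ(y)=y^2$, for which $v(T,x) = x^2+T$ exactly:

```python
import numpy as np

def feynman_kac_heat(x=0.3, T=1.0, n_paths=40000, seed=4):
    """Monte Carlo for v(T,x) = E[v0(x + W_T)] with v0(y) = y^2."""
    rng = np.random.default_rng(seed)
    X_T = x + np.sqrt(T) * rng.standard_normal(n_paths)
    return float(np.mean(X_T**2))

est = feynman_kac_heat()   # exact answer: x^2 + T = 0.3^2 + 1 = 1.09
```

For general $b$ and $\sigma$ one would replace the exact Gaussian draw by an Euler–Maruyama path, as in Chapter 4.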
CHAPTER 7

CIR Processes

Let's next look at CIR (Cox–Ingersoll–Ross) processes; i.e., solutions of the SDE
\[ (33)\qquad d\lambda_t = -\alpha(\lambda_t-\bar\lambda)dt + \sigma\sqrt{\lambda_t}\,dW_t, \qquad \lambda_0 = \lambda^\circ, \]
where $\alpha$, $\bar\lambda$, $\sigma$ and $\lambda^\circ$ are all fixed positive constants, and $W$ is a Brownian motion. These processes are heavily used in finance. We'll see why.

First, note that (33) only makes sense if $\lambda$ is nonnegative. To separate the nonnegativity of $\lambda$ from the existence of the solution to (33), let's replace (33) with
\[ (34)\qquad d\lambda_t = -\alpha(\lambda_t-\bar\lambda)dt + \sigma\sqrt{\lambda_t\vee0}\,dW_t, \qquad \lambda_0 = \lambda^\circ. \]
Let's first show that the solution of (34), if it exists, is indeed nonnegative; then (33) and (34) are equivalent.

First fix $\phi\in C(\mathbb{R};[0,1])$ such that $\phi\le\chi_{(-\infty,0]}$. Define
\[ g_\phi(x) \overset{\mathrm{def}}{=} \chi_{(-\infty,0)}(x)\int_{y=0}^x\int_{z=0}^y\phi(z)dz\,dy, \qquad x\in\mathbb{R}. \]
Note also that
\[ g'_\phi(x) = \chi_{(-\infty,0)}(x)\int_{z=0}^x\phi(z)dz, \qquad g''_\phi(x) = \chi_{(-\infty,0)}(x)\phi(x). \]
Applying Itô's formula, we have that
\[ g_\phi(\lambda_t) = g_\phi(\lambda^\circ) - \alpha\int_{s=0}^t(\lambda_s-\bar\lambda)g'_\phi(\lambda_s)ds + \frac12\sigma^2\int_{s=0}^t(\lambda_s\vee0)g''_\phi(\lambda_s)ds + M_t, \]
where $M$ is a martingale. Note that $g''_\phi$ has support in $(-\infty,0)$, so $(\lambda\vee0)g''_\phi(\lambda) = 0$. Also note that
\[ -\alpha(\lambda-\bar\lambda)g'_\phi(\lambda) = -\alpha\lambda g'_\phi(\lambda) + \alpha\bar\lambda g'_\phi(\lambda). \]
We note that $g'_\phi\le0$, so $\alpha\bar\lambda g'_\phi(\lambda)\le0$. Secondly, $g'_\phi$ has support in $(-\infty,0)$ and is nonpositive there, so $\lambda g'_\phi(\lambda)\ge0$, and hence $-\alpha\lambda g'_\phi(\lambda)\le0$. Combining things together (and noting $g_\phi(\lambda^\circ)=0$), we get that
\[ \mathbb{E}[g_\phi(\lambda_t)] \le 0; \]
since $g_\phi\ge0$, in fact $\mathbb{E}[g_\phi(\lambda_t)]=0$. Taking a sequence of $\phi$'s with $\phi\nearrow\chi_{(-\infty,0)}$, we get that $g_\phi(x)\nearrow\frac12x^2\chi_{(-\infty,0)}(x)$ (pointwise), and using monotone convergence we get that $\mathbb{E}\left[\lambda_t^2\chi_{(-\infty,0)}(\lambda_t)\right]=0$; thus indeed $\lambda_t\ge0$ $\mathbb{P}$-a.s.
Let's next construct a solution to (33) via relaxation. Fix $\varepsilon$ and $\varepsilon'$ in $[0,1)$ and assume that $\lambda$ and $\lambda'$ satisfy
\[ (35)\qquad \lambda_t = \lambda^\circ - \alpha\int_{s=0}^t\{\lambda_s-\bar\lambda\}ds + \sigma\int_{s=0}^t\sqrt{\lambda_s\vee\varepsilon}\,dW_s, \qquad \lambda'_t = \lambda^\circ - \alpha\int_{s=0}^t\{\lambda'_s-\bar\lambda\}ds + \sigma\int_{s=0}^t\sqrt{\lambda'_s\vee\varepsilon'}\,dW_s; \]
we are assured that $\lambda$ and $\lambda'$ exist only for positive $\varepsilon$ and $\varepsilon'$ (see the exercises); setting $\varepsilon$ or $\varepsilon'$ to be zero returns us to (34). Define $\nu_t \overset{\mathrm{def}}{=} \lambda_t-\lambda'_t$; then
\[ \nu_t = -\alpha\int_{s=0}^t\nu_s\,ds + \sigma\int_{s=0}^t\left\{\sqrt{\lambda_s\vee\varepsilon}-\sqrt{\lambda'_s\vee\varepsilon'}\right\}dW_s. \]
Fix $\eta\in(0,1)$. Let $\phi\in C(\mathbb{R}_+;[0,1])$ be such that $\phi\le\chi_{[\eta,\eta^{1/2}]}$. Define
\[ g_\phi(x) \overset{\mathrm{def}}{=} \frac{2}{\ln\eta^{-1}}\int_{y=0}^{|x|}\int_{z=0}^y\frac1z\phi(z)dz\,dy. \]
Note that
\[ g''_\phi(x) = \frac{2}{\ln\eta^{-1}}\frac{1}{|x|}\phi(|x|) \le \frac{2}{\ln\eta^{-1}}\frac{1}{|x|}\chi_{[\eta,\eta^{1/2}]}(|x|) \le \frac{2}{\ln\eta^{-1}}\min\left\{\frac{1}{|x|},\frac{1}{\eta}\right\}. \]
Note also that
\[ (36)\qquad |g'_\phi(x)| = \frac{2}{\ln\eta^{-1}}\int_{z=0}^{|x|}\frac1z\phi(z)dz \le \frac{2}{\ln\eta^{-1}}\int_{z=0}^\infty\frac1z\chi_{[\eta,\eta^{1/2}]}(z)dz = \frac{2}{\ln\eta^{-1}}\left\{\ln\eta^{1/2}-\ln\eta\right\} = 1. \]
Applying Itô's formula to $g_\phi(\nu_t)$, we have that
\[ g_\phi(\nu_t) = -\alpha\int_{s=0}^t\nu_sg'_\phi(\nu_s)ds + \frac{\sigma^2}{2}\int_{s=0}^t\left(\sqrt{\lambda_s\vee\varepsilon}-\sqrt{\lambda'_s\vee\varepsilon'}\right)^2g''_\phi(\nu_s)ds + M_t, \]
where $M$ is a martingale. Now note that
\[ (37)\qquad \left|\sqrt{x}-\sqrt{y}\right| \le \sqrt{|x-y|} \]
for all $x$ and $y$ in $\mathbb{R}_+$. Thus
\begin{align*} \frac{\sigma^2}{2}\int_{s=0}^t\left(\sqrt{\lambda_s\vee\varepsilon}-\sqrt{\lambda'_s\vee\varepsilon'}\right)^2g''_\phi(\nu_s)ds &\le \frac{\sigma^2}{2}\int_{s=0}^t\left|(\lambda_s\vee\varepsilon)-(\lambda'_s\vee\varepsilon')\right|g''_\phi(\nu_s)ds \\
&\le \frac{\sigma^2}{2}\int_{s=0}^t\left\{\left|(\lambda_s\vee\varepsilon)-(\lambda'_s\vee\varepsilon)\right|+\left|(\lambda'_s\vee\varepsilon)-(\lambda'_s\vee\varepsilon')\right|\right\}g''_\phi(\nu_s)ds \\
&\le \frac{\sigma^2}{2}\int_{s=0}^t|\lambda_s-\lambda'_s|g''_\phi(\nu_s)ds + \frac{\sigma^2}{2}\int_{s=0}^t|\varepsilon-\varepsilon'|g''_\phi(\nu_s)ds \\
&\le \frac{\sigma^2t}{\ln\eta^{-1}} + \frac{\sigma^2|\varepsilon-\varepsilon'|t}{\eta\ln\eta^{-1}}. \end{align*}
Using (36), we also have that
\[ -\alpha\int_{s=0}^t\nu_sg'_\phi(\nu_s)ds \le \alpha\int_{s=0}^t|\nu_s|\,ds. \]
Combining things together, we get that
\[ \mathbb{E}[g_\phi(\nu_t)] \le \alpha\int_{s=0}^t\mathbb{E}[|\nu_s|]ds + \left\{\frac{\sigma^2}{\ln\eta^{-1}}+\frac{\sigma^2|\varepsilon-\varepsilon'|}{\eta\ln\eta^{-1}}\right\}t. \]
Letting $\phi\nearrow\chi_{[\eta,\eta^{1/2}]}$, we get that $g_\phi(x)\nearrow\bar g_\eta(x)$ (pointwise), where
\[ \bar g_\eta(x) \overset{\mathrm{def}}{=} \frac{2}{\ln\eta^{-1}}\int_{y=0}^{|x|}\int_{z=0}^y\frac1z\chi_{[\eta,\eta^{1/2}]}(z)dz\,dy. \]
Thus
\[ \mathbb{E}[\bar g_\eta(\nu_t)] \le \alpha\int_{s=0}^t\mathbb{E}[|\nu_s|]ds + \left\{\frac{\sigma^2}{\ln\eta^{-1}}+\frac{\sigma^2|\varepsilon-\varepsilon'|}{\eta\ln\eta^{-1}}\right\}t. \]
Let's now compare $\bar g_\eta$ to $x\mapsto|x|$. Since $\bar g_\eta$ is even, we only need to do this on $\mathbb{R}_+$. Since
\[ \bar g'_\eta(x) = \frac{2}{\ln\eta^{-1}}\int_{z=0}^x\frac1z\chi_{[\eta,\eta^{1/2}]}(z)dz \]
on $\mathbb{R}_+$, we see that $\bar g'_\eta$ is nonnegative and nondecreasing on $\mathbb{R}_+$, and (using a calculation like (36))
\[ 0 \le \bar g'_\eta(x) \le \bar g'_\eta(\sqrt\eta) = 1 \]
for all $x>0$. Furthermore $\bar g'_\eta$ is constant on $[\sqrt\eta,\infty)$, so in fact $\bar g'_\eta(x)=1$ for $x\in[\sqrt\eta,\infty)$. Since $\bar g_\eta(0)=0$, we in fact have that
\[ |x|-\bar g_\eta(x) = \int_{y=0}^{|x|}\left\{1-\bar g'_\eta(y)\right\}dy \le \int_{y=0}^{\sqrt\eta}dy = \sqrt\eta \]
for all $x\in\mathbb{R}$. Summing up, we have that
\[ |x| \le \bar g_\eta(x)+\sqrt\eta \]
for all $x\in\mathbb{R}$. Thus
\[ \mathbb{E}[|\nu_t|] \le \alpha\int_{s=0}^t\mathbb{E}[|\nu_s|]ds + \frac{\sigma^2t}{\ln\eta^{-1}} + \frac{\sigma^2|\varepsilon-\varepsilon'|t}{\eta\ln\eta^{-1}} + \sqrt\eta. \]
By Gronwall's inequality,
\[ \mathbb{E}[|\nu_t|] \le e^{\alpha t}\left\{\frac{\sigma^2t}{\ln\eta^{-1}} + \frac{\sigma^2|\varepsilon-\varepsilon'|t}{\eta\ln\eta^{-1}} + \sqrt\eta\right\}. \]
First choosing $\eta$ small and then letting $\varepsilon$ and $\varepsilon'$ tend to zero, we get that
\[ (38)\qquad \lim_{\varepsilon,\varepsilon'\searrow0}\mathbb{E}[|\lambda_t-\lambda'_t|] = 0. \]
We also have uniqueness. If $\varepsilon=\varepsilon'=0$ in (35), then we have that
\[ \mathbb{E}[|\lambda_t-\lambda'_t|] \le e^{\alpha t}\left\{\frac{\sigma^2t}{\ln\eta^{-1}}+\sqrt\eta\right\}. \]
Letting $\eta\searrow0$, we have that $\mathbb{E}[|\lambda_t-\lambda'_t|]=0$, ensuring uniqueness.
Exercises

(1) Prove (37).
(2) For each $\varepsilon>0$, show that the map $x\mapsto\sqrt{x\vee\varepsilon}$ is Lipschitz-continuous and has at most linear growth.
(3) Let's complete the calculations.
(a) From (38), show that
\[ \lim_{\varepsilon,\varepsilon'\searrow0}\mathbb{E}\left[\sup_{0\le s\le t}|\lambda_s-\lambda'_s|\right] = 0 \]
(as usual, this is needed to ensure that $\lambda$ has continuous paths, thus implying that $\lambda$ is predictable).
(b) Suppose that $\lambda^*$ is a continuous and adapted process such that
\[ \lim_{\varepsilon\searrow0}\mathbb{E}\left[\sup_{0\le s\le t}|\lambda_s-\lambda^*_s|\right] = 0. \]
Show that $\lambda^*$ satisfies (34).
(4) Define $r(t) \overset{\mathrm{def}}{=} \mathbb{E}[\lambda_t]$. Show that $\dot r(t) = -\alpha(r(t)-\bar\lambda)$; thus
\[ r(t) = \lambda^\circ e^{-\alpha t} + \bar\lambda(1-e^{-\alpha t}). \]
(5) Show that if $2\bar\lambda\alpha>\sigma^2$, then a gamma distribution is a stationary distribution for $\lambda$; what are the parameters of the gamma distribution?
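The relaxed dynamics (34) suggest a "full-truncation" Euler scheme, in which the argument of the square root is floored at zero at each step. A minimal sketch (all parameter values are illustrative assumptions, chosen so that $2\bar\lambda\alpha>\sigma^2$) that also checks the mean formula $r(t)$ of exercise (4):

```python
import numpy as np

def euler_cir(alpha=2.0, lbar=0.05, sigma=0.2, l0=0.06, t=1.0,
              n_steps=400, n_paths=20000, seed=5):
    """Full-truncation Euler for d(lam) = -alpha(lam - lbar)dt + sigma sqrt(lam v 0)dW."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    lam = np.full(n_paths, l0)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        lam = lam - alpha * (lam - lbar) * dt + sigma * np.sqrt(np.maximum(lam, 0.0)) * dW
    return lam

lam_t = euler_cir()
# mean should be near r(t) = lbar + (l0 - lbar) e^{-alpha t}
```

The truncation `np.maximum(lam, 0.0)` is exactly the $\lambda\vee0$ of (34); the discrete scheme can dip slightly below zero, but the drift pulls it back.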
CHAPTER 8

Girsanov's theorem

Let $W$ be a Brownian motion with respect to $\{\mathcal{F}_t\}_{t\ge0}$ (which may be right-continuous). Let $f$ be a bounded predictable process. Define
\[ \tilde W_t \overset{\mathrm{def}}{=} W_t - \int_{s=0}^t f_s\,ds. \]
Fix $T>0$ and define
\[ \tilde{\mathbb{P}}(A) \overset{\mathrm{def}}{=} \mathbb{E}\left[\chi_A\exp\left[\int_{s=0}^T f_s\,dW_s - \frac12\int_{s=0}^T f_s^2\,ds\right]\right], \qquad A\in\mathcal{F}. \]
We claim that $\tilde{\mathbb{P}}$ is a probability measure and that, under $\tilde{\mathbb{P}}$, $\{\tilde W_t\}_{0\le t\le T}$ is a Brownian motion with respect to $\{\mathcal{F}_t\}_{t\ge0}$.

We start by defining
\[ L^\theta_t \overset{\mathrm{def}}{=} \exp\left[\int_{s=0}^t\left(\sqrt{-1}\theta+f_s\right)dW_s - \frac12\int_{s=0}^t\left(\sqrt{-1}\theta+f_s\right)^2ds\right] \]
for all $\theta\in\mathbb{R}$ and $t\in[0,T]$. Note that
\[ L^\theta_t = L^0_t\exp\left[\sqrt{-1}\theta\tilde W_t + \frac12\theta^2t\right]. \]
By Itô's formula,
\[ (39)\qquad dL^\theta_t = L^\theta_t\left\{\left(\sqrt{-1}\theta+f_t\right)dW_t - \frac12\left(\sqrt{-1}\theta+f_t\right)^2dt\right\} + \frac12L^\theta_t\left(\sqrt{-1}\theta+f_t\right)^2dt = L^\theta_t\left(\sqrt{-1}\theta+f_t\right)dW_t. \]
Thus $L^\theta$ is a martingale. We also have that $L^\theta_0 = 1$, and that
\[ \tilde{\mathbb{P}}(A) = \mathbb{E}[\chi_AL^0_T], \qquad A\in\mathcal{F}. \]
We first note that by the martingale property,
\[ \tilde{\mathbb{P}}(\Omega) = \mathbb{E}[L^0_T] = \mathbb{E}[L^0_0] = 1; \]
thus $\tilde{\mathbb{P}}$ is indeed a probability measure.
thus P̃ is indeed a probability measure.Secondly, let’s note that
Ẽ[exp
[√−1θW̃t +
1
2θ2t
] ∣∣∣∣Fs] = E[L0T exp
[√−1θW̃t + 12θ
2t] ∣∣∣∣Fs]
E[L0T |Fs]By the martingale property,
E[L0T |Fs] = L0t .We also have that
E[L0T exp
[√−1θW̃t +
1
2θ2t
] ∣∣∣∣Fs] = E [E[L0T |Ft] exp [√−1θW̃t + 12θ2t] ∣∣∣∣Fs]
= E[L0t exp
[√−1θW̃t +
1
2θ2t
] ∣∣∣∣Fs]= E[Lθt |Fs] = Lθs = L0s exp
[√−1θW̃s +
1
2θ2s
].
31
-
Thus
Ẽ[exp
[√−1θW̃t +
1
2θ2t
] ∣∣∣∣Fs] = exp [√−1θW̃s + 12θ2s].
Rearranging, we get that
Ẽ[exp
[√−1θ
(W̃t − W̃s
)] ∣∣∣∣Fs] = exp [−θ22 (t− s)].
Exercises(1) Prove (39) by considering real and imaginary parts.
32
-
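Girsanov's theorem can be checked by Monte Carlo for a constant drift $f\equiv c$: under $\tilde{\mathbb{P}}$, $W_T = \tilde W_T + cT$ has mean $cT$, so $\mathbb{E}[L^0_TW_T]\approx cT$ under the original measure, and $\mathbb{E}[L^0_T]\approx1$. A minimal sketch ($c$, $T$, and sample sizes are illustrative assumptions):

```python
import numpy as np

def girsanov_check(c=0.7, T=1.0, n_paths=100000, seed=6):
    """Estimate the P-tilde mean of W_T via the density L_T = exp(c W_T - c^2 T/2)."""
    rng = np.random.default_rng(seed)
    W_T = np.sqrt(T) * rng.standard_normal(n_paths)
    L_T = np.exp(c * W_T - 0.5 * c**2 * T)
    return float(np.mean(L_T)), float(np.mean(L_T * W_T))

total_mass, tilted_mean = girsanov_check()
# total_mass ~ 1 (P-tilde is a probability measure); tilted_mean ~ cT
```

This is the importance-sampling reading of Girsanov: reweighting by $L^0_T$ shifts the drift of $W$ without resampling paths.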
CHAPTER 9

The martingale problem and Lévy's characterization of Brownian motion

Suppose that $X$ is a continuous stochastic process which is adapted to a filtration $\{\mathcal{F}_t\}_{t\ge0}$. Suppose that $X_0=0$ and that for every $f\in C^2_b(\mathbb{R})$ (i.e., $f\in C^2(\mathbb{R})$ such that $f$ and its first two derivatives are bounded)
\[ (40)\qquad f(X_t) = f(X_0) + \frac12\int_{r=0}^t\ddot f(X_r)dr + M_t, \]
where $M$ is a martingale with respect to $\{\mathcal{F}_t\}_{t\ge0}$ (which depends of course on $f$). This is an example of the martingale problem; see [?]. We claim that necessarily $X$ is a Brownian motion with respect to $\{\mathcal{F}_t\}_{t\ge0}$.

Fix $\theta\in\mathbb{R}$. Define $f_\theta(x) \overset{\mathrm{def}}{=} \exp\left[\sqrt{-1}\theta x\right]$ for all $x\in\mathbb{R}$. Applying (40) to the real and imaginary parts of $f_\theta$, we get that
\[ f_\theta(X_t) = f_\theta(X_0) - \frac12\theta^2\int_{r=0}^tf_\theta(X_r)dr + M_t. \]
This in turn implies that
\[ f_\theta(X_t) = f_\theta(X_s) - \frac12\theta^2\int_{r=s}^tf_\theta(X_r)dr + M_t - M_s. \]
Taking conditional expectations, we get that
\[ (41)\qquad \mathbb{E}[f_\theta(X_t)|\mathcal{F}_s] = f_\theta(X_s) - \frac12\theta^2\int_{r=s}^t\mathbb{E}[f_\theta(X_r)|\mathcal{F}_s]dr. \]
Consequently
\[ (42)\qquad \mathbb{E}[f_\theta(X_t)|\mathcal{F}_s] = f_\theta(X_s)\exp\left[-\frac12\theta^2(t-s)\right]. \]
Rearranging, we get that
\[ \mathbb{E}\left[\exp\left[\sqrt{-1}\theta(X_t-X_s)\right]\middle|\mathcal{F}_s\right] = \exp\left[-\frac12\theta^2(t-s)\right]; \]
i.e., $X$ has independent $N(0,t-s)$ increments. Thus in particular, if $M$ is a continuous martingale with quadratic variation $\langle M\rangle_t = t$, then (40) holds by Itô's formula, and $M$ must be a Brownian motion.

1. Poisson processes

We can do something similar with jump processes. Let's do this in a general way. Fix a finite measure $\pi$ on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$. Define
\[ (43)\qquad (\mathscr{L}f)(x) = \int_{y\in\mathbb{R}}\{f(y)-f(x)\}\pi(dy), \qquad x\in\mathbb{R}, \]
for all $f\in B(\mathbb{R})$ (the collection of bounded measurable functions on $\mathbb{R}$). Suppose that $X$ is a right-continuous process such that
\[ (44)\qquad f(X_t) = f(X_0) + \int_{s=0}^t(\mathscr{L}f)(X_s)ds + M_t, \]
where M is a martingale with respect to {FXt }t≥0, where
FXtdef= σ{Xs; s ≤ t}.
By a small amount of work, we can extend (44). For f ∈ B(R₊ × R) which is differentiable in the first argument, we get that
(45) f(t, X_t) = f(0, X_0) + ∫_{s=0}^t {(∂f/∂t)(s, X_s) + (Lf)(s, X_s)} ds + M_t
where M is a martingale. Let's assume that X_0 = x*. Define
τ := inf{t ≥ 0 : X_t ≠ x*}.
We are interested in the statistics of (τ, X_τ); i.e., when X jumps, and where it jumps to. We will see that (43) leads to something nice.
Let's also carefully pick f in (45). Namely, fix λ > 0 and fix φ ∈ B(R). Let's pick f(t, x) = e^{−λt}φ(x). For s ∈ [0, τ), we have that φ(X_s) = φ(x*). For t > 0, optional sampling implies that
(46)
E[exp[−λ(τ∧t)] φ(X_{τ∧t})] = φ(x*) + E[∫_{s=0}^{τ∧t} e^{−λs} ds] {−λφ(x*) + ⟨φ, π⟩ − π(R \ {x*})φ(x*)}
= φ(x*) + (1/λ) E[1 − exp[−λ(τ∧t)]] {⟨φ, π⟩ − (λ + π(R \ {x*}))φ(x*)}
where
⟨φ, π⟩ := ∫_{x∈R} φ(x) π(dx).
Taking t ↗ ∞, we get that
(47) E[exp[−λτ] φ(X_τ)] = φ(x*) + {1 − E[exp[−λτ]]} {(1/λ)⟨φ, π⟩ − ((λ + π(R \ {x*}))/λ) φ(x*)}.
Let's begin by setting φ = χ_{R\{x*}}. Then (47) implies that
E[exp[−λτ]] = (π(R \ {x*})/λ) E[1 − exp[−λτ]].
Rearranging this, we get that
E[exp[−λτ]] = π(R \ {x*}) / (λ + π(R \ {x*})).
Thus τ is exponential with parameter π(R \ {x*}). Reinserting this back in (47), we get that
E[exp[−λτ] φ(X_τ)] = φ(x*) + (λ/(λ + π(R \ {x*}))) {(1/λ)⟨φ, π⟩ − ((λ + π(R \ {x*}))/λ) φ(x*)}
= ⟨φ, π⟩ / (λ + π(R \ {x*})).
Define also
π̃_{x*}(A) := π(A \ {x*}) / π(R \ {x*}), A ∈ B(R).
Fix next A ⊂ R \ {x*}, and set φ = χ_A. Then
E[exp[−λτ] χ_A(X_τ)] = (π(R \ {x*})/(λ + π(R \ {x*}))) π̃_{x*}(A).
Taking λ↘ 0, we see that Xτ has distribution π̃x∗ , and τ and Xτ are independent.
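This can be checked with a small simulation (an illustrative sketch; the measure π, the point x*, and the sample sizes below are arbitrary choices, not from the text). We simulate the process with generator (43) by jumping at rate π(R) to a π-distributed point; jumps that land back at x* are invisible, and the first visible jump time should be exponential with parameter π(R \ {x*}):

```python
import numpy as np

# Simulation sketch: take pi = 2*delta_0 + delta_1 + delta_2 and x* = 0.
# The process jumps at rate pi(R) = 4 to a pi-distributed point; jumps
# landing back at x* = 0 are invisible.  The first visible jump time tau
# should be Exp(pi(R \ {x*})) = Exp(2), and X_tau uniform on {1, 2}.
rng = np.random.default_rng(1)
atoms = np.array([0.0, 1.0, 2.0])
weights = np.array([2.0, 1.0, 1.0])
total = weights.sum()
n_samples = 20_000

taus = np.empty(n_samples)
jumps = np.empty(n_samples)
for i in range(n_samples):
    t = 0.0
    while True:
        t += rng.exponential(1.0 / total)          # inter-jump time ~ Exp(pi(R))
        x = rng.choice(atoms, p=weights / total)   # new location ~ pi / pi(R)
        if x != 0.0:                               # first visible jump
            taus[i], jumps[i] = t, x
            break

print(taus.mean(), (jumps == 1.0).mean())  # ~ 1/2 and ~ 1/2
```

Note the thinning structure: the invisible jumps back to x* are exactly why the visible rate is π(R \ {x*}) rather than π(R).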
Exercises
(1) Show that (41) implies (42).
(2) Let's consider an alternate representation of (40). Fix f ∈ C²_b(R), 0 ≤ s_0 < s_1 < ··· < s_N ≤ s < t, and {ϕ_n}_{n=1}^N. Consider the equality
(48) E[{f(X_t) − f(X_s) − ∫_{r=s}^t (1/2) f̈(X_r) dr} ∏_{n=1}^N ϕ_n(X_{s_n})] = 0.
(a) Show that if (40) holds, then (48) holds.
(b) Show that if F_t := σ{X_s; 0 ≤ s ≤ t}, then (48) implies (40).
(3) Prove (46).
CHAPTER 10
Martingales as time-changed Brownian motions
Let M be a continuous martingale with respect to a filtration {F_t}_{t≥0}. Suppose that ⟨M⟩ is differentiable and that d⟨M⟩_t/dt ≥ m∘ for all t > 0, for some m∘ > 0. For all t ≥ 0, define
τ(t) := inf{s ≥ 0 : ⟨M⟩_s ≥ t}.
Then
(49) 〈M〉τ(t) = t,
so in a sense τ(t) = ⟨M⟩⁻¹_t. Note that since ⟨M⟩_{t/m∘} ≥ t, we must have that τ(t) ≤ t/m∘, so τ(t) is bounded. By Ito's formula, we have that
f(M_t) = f(M_0) + (1/2)∫_{s=0}^t f̈(M_s) d⟨M⟩_s + Ñ_t
where Ñ is a martingale (with respect to {F_t}_{t≥0}). We then have that
(50) f(M_{τ(t)}) = f(M_0) + (1/2)∫_{s=0}^{τ(t)} f̈(M_s) d⟨M⟩_s + Ñ_{τ(t)}.
Set
F̃_t := F_{τ(t)}.
By optional sampling, {Ñ_{τ(t)}}_{t≥0} is a martingale with respect to {F̃_t}_{t≥0}. By differentiating (49), we get that
τ̇(t) (d⟨M⟩/dt)(τ(t)) = 1.
Making the transformation u = ⟨M⟩_s in the integral term in (50), we get that
f(M_{τ(t)}) = f(M_0) + (1/2)∫_{u=0}^t f̈(M_{τ(u)}) du + Ñ_{τ(t)}.
Thus {M_{τ(t)}}_{t≥0} solves the martingale problem of Chapter 9 with respect to {F̃_t}_{t≥0}; i.e., {M_{τ(t)}}_{t≥0} is a Brownian motion, and M is a time-changed Brownian motion.
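A quick numerical illustration (a sketch; the integrand σ, grid sizes, and seed below are arbitrary choices, not from the text): for M_t = ∫₀ᵗ σ(s) dW_s with a deterministic σ bounded below, ⟨M⟩ is deterministic, so τ(t) can be read off a grid; sampling M at τ(1) across paths should give variance 1, as the time-change result predicts.

```python
import numpy as np

# Time-change sketch: M_t = int_0^t sigma(s) dW_s with deterministic
# sigma(s) = 1 + 0.5 sin(2 pi s) has deterministic <M>_t, so
# tau(t) = <M>^{-1}(t) is deterministic too.  M_{tau(1)} should be N(0, 1).
rng = np.random.default_rng(2)
n_paths, n_steps, T = 40_000, 2_000, 2.0
dt = T / n_steps
s = np.arange(n_steps) * dt
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * s)     # bounded below, so tau(1) <= 1/m
qv = np.cumsum(sigma**2) * dt                 # <M> on the grid
k = int(np.searchsorted(qv, 1.0))             # grid index of tau(1)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
M_tau = (sigma[:k] * dW[:, :k]).sum(axis=1)   # M sampled at time tau(1)
print(M_tau.var())  # close to 1
```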
Exercises
(1) Suppose that f is a bounded predictable (and nonzero, for simplicity) function. Show that
lim_{η↘0} P{ sup_{0≤s≤t≤T, |t−s|≤η} |∫_{r=s}^t f_r dW_r| ≥ ε } = 0
for all ε > 0.
CHAPTER 11
Filtering
Suppose that we have a plant process X and an observation process Y which are given by the coupled SDE
dX_t = b(X_t) dt + σ(X_t) dW_t, X_0 = x∘
dY_t = h(X_t) dt + dV_t, Y_0 = 0,
where W and V are independent Brownian motions.
Suppose that we want to compute
π_t(A) := P{X_t ∈ A | Y_t}, A ∈ B(R),
where
Y_t := σ{Y_s; s ≤ t}.
Note that
Y_t = V_t + ∫_{s=0}^t h(X_s) ds.
Let's use Girsanov's theorem to make Y into a Brownian motion. Define
(51) L_t := exp[−∫_{s=0}^t h(X_s) dV_s − (1/2)∫_{s=0}^t h²(X_s) ds] = exp[−∫_{s=0}^t h(X_s) dY_s + (1/2)∫_{s=0}^t h²(X_s) ds].
Define
(52) P̃_t(A) := E[χ_A L_t].
Then P̃_t is a probability measure, and under P̃_t, {(W_s, Y_s); 0 ≤ s ≤ t} is a 2-dimensional standard Brownian motion (and W and Y are independent). We also have that
P(A) = Ẽ_t[χ_A L_t⁻¹], A ∈ F,
and that
(53) L_t⁻¹ = exp[∫_{s=0}^t h(X_s) dY_s − (1/2)∫_{s=0}^t h²(X_s) ds].
For any A ∈ B(R) we have that
π_t(A) = Ẽ_t[χ_A(X_t) L_t⁻¹ | Y_t] / Ẽ_t[L_t⁻¹ | Y_t].
We want to find the evolution of the numerator and denominator. We will heavily use the statistical structure of things under P̃_t. By Ito's formula (and (53)),
f(X_t)L_t⁻¹ = f(X_0)L_0⁻¹ + ∫_{s=0}^t (Lf)(X_s)L_s⁻¹ ds + ∫_{s=0}^t (hf)(X_s)L_s⁻¹ dY_s + ∫_{s=0}^t (σf′)(X_s)L_s⁻¹ dW_s
for 0 ≤ t ≤ T, where
(Lf)(x) = (1/2)σ²(x)f̈(x) + b(x)ḟ(x).
Thus
Ẽ_t[f(X_t)L_t⁻¹ | Y_t] = Ẽ_t[f(X_0)L_0⁻¹ | Y_t] + Ẽ_t[∫_{s=0}^t (Lf)(X_s)L_s⁻¹ ds | Y_t]
+ Ẽ_t[∫_{s=0}^t (hf)(X_s)L_s⁻¹ dY_s | Y_t] + Ẽ_t[∫_{s=0}^t (σf′)(X_s)L_s⁻¹ dW_s | Y_t].
Next fix s ∈ [0, t] and φ ∈ B(R). Since {Y_{t′} − Y_s; s ≤ t′ ≤ t} is independent of Y_s under P̃_t,
Ẽ_t[φ(X_s)L_s⁻¹ | Y_t] = Ẽ_t[φ(X_s)L_s⁻¹ | σ{Y_{t′} − Y_s; s ≤ t′ ≤ t} ∨ Y_s] = Ẽ_t[φ(X_s)L_s⁻¹ | Y_s].
Next, note that the statistics of {(W_r, Y_r); 0 ≤ r ≤ s} are the same under P̃_t and P̃_s. Thus
(54) Ẽ_t[φ(X_s)L_s⁻¹ | Y_t] = Ẽ_s[φ(X_s)L_s⁻¹ | Y_s].
We first have (from (54)) that
Ẽ_t[f(X_0)L_0⁻¹ | Y_t] = Ẽ_0[f(X_0)L_0⁻¹ | Y_0]
and
Ẽ_t[∫_{s=0}^t (Lf)(X_s)L_s⁻¹ ds | Y_t] = ∫_{s=0}^t Ẽ_t[(Lf)(X_s)L_s⁻¹ | Y_t] ds = ∫_{s=0}^t Ẽ_s[(Lf)(X_s)L_s⁻¹ | Y_s] ds.
Next recall the s_n^{(N)}'s of (11). We then have that
Ẽ_t[∫_{s=0}^t (hf)(X_s)L_s⁻¹ dY_s | Y_t]
= lim_{N→∞} Ẽ_t[ Σ_{n=1}^{N−1} (hf)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ {Y_{s_{n+1}^{(N)}} − Y_{s_n^{(N)}}} | Y_t]
= lim_{N→∞} Σ_{n=1}^{N−1} Ẽ_t[(hf)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ {Y_{s_{n+1}^{(N)}} − Y_{s_n^{(N)}}} | Y_t]
= lim_{N→∞} Σ_{n=1}^{N−1} Ẽ_t[(hf)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ | Y_t] {Y_{s_{n+1}^{(N)}} − Y_{s_n^{(N)}}}
= lim_{N→∞} Σ_{n=1}^{N−1} Ẽ_{s_n^{(N)}}[(hf)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ | Y_{s_n^{(N)}}] {Y_{s_{n+1}^{(N)}} − Y_{s_n^{(N)}}}
= ∫_{s=0}^t Ẽ_s[(hf)(X_s)L_s⁻¹ | Y_s] dY_s.
Finally note that under P̃_t, W_{s_{n+1}^{(N)}} − W_{s_n^{(N)}} is independent of X_{s_n^{(N)}} and Y. We again use (11). We have that
Ẽ_t[∫_{s=0}^t (σf′)(X_s)L_s⁻¹ dW_s | Y_t]
= lim_{N→∞} Ẽ_t[ Σ_{n=0}^{N−1} (σf′)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ {W_{s_{n+1}^{(N)}} − W_{s_n^{(N)}}} | Y_t]
= lim_{N→∞} Σ_{n=0}^{N−1} Ẽ_t[(σf′)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ {W_{s_{n+1}^{(N)}} − W_{s_n^{(N)}}} | Y_t]
= lim_{N→∞} Σ_{n=0}^{N−1} Ẽ_t[(σf′)(X_{s_n^{(N)}}) L_{s_n^{(N)}}⁻¹ Ẽ_t[W_{s_{n+1}^{(N)}} − W_{s_n^{(N)}} | Y_t ∨ σ(X_{s_n^{(N)}})] | Y_t] = 0.
Collecting things together, we get that
(55) Ẽ_t[f(X_t)L_t⁻¹ | Y_t] = Ẽ_0[f(X_0)L_0⁻¹ | Y_0] + ∫_{s=0}^t Ẽ_s[(Lf)(X_s)L_s⁻¹ | Y_s] ds + ∫_{s=0}^t Ẽ_s[(hf)(X_s)L_s⁻¹ | Y_s] dY_s.
Let's now assume that there is a (random) field {v(t, x); t ≥ 0, x ∈ R} such that
Ẽ_t[χ_A(X_t)L_t⁻¹ | Y_t] = ∫_{x∈A} v(t, x) dx.
Then
π_t(A) = ∫_{x∈A} v(t, x) dx / ∫_{x∈R} v(t, x) dx
(note that in general ∫_{x∈R} v(t, x) dx ≠ 1). From (55), we get the Zakai equation
(56) dv(t, x) = L*v(t, x) dt + h(x)v(t, x) dY_t.
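The change-of-measure representation behind (55) can be exercised numerically. The following sketch (an illustrative linear model, not from the text; the model, step sizes, and particle count are arbitrary choices) reweights independent signal paths by L⁻¹ — the idea underlying the unnormalized filter — and compares the resulting posterior mean with the Kalman-Bucy filter, which is exact for linear models:

```python
import numpy as np

# Filtering sketch: under P-tilde, Y is a Brownian motion independent of
# the signal, so E[X_t | Y_t] can be estimated by reweighting independent
# signal paths with L^{-1} = exp(int h(X) dY - (1/2) int h(X)^2 ds).
# Model: dX = -X dt + dW, dY = X dt + dV (so h(x) = x).
rng = np.random.default_rng(2)
T, n = 1.0, 400
dt = T / n
n_part = 50_000

# one "true" signal/observation path
X = np.zeros(n + 1)
Y = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = X[k] - X[k] * dt + np.sqrt(dt) * rng.normal()
    Y[k + 1] = Y[k] + X[k] * dt + np.sqrt(dt) * rng.normal()
dY = np.diff(Y)

# weighted-particle approximation of the unnormalized filter
Xp = np.zeros(n_part)
logw = np.zeros(n_part)
for k in range(n):
    logw += Xp * dY[k] - 0.5 * Xp**2 * dt      # left-endpoint Ito sums
    Xp = Xp - Xp * dt + np.sqrt(dt) * rng.normal(size=n_part)
w = np.exp(logw - logw.max())
particle_mean = float(np.sum(w * Xp) / np.sum(w))

# Kalman-Bucy reference: dP/dt = -2P + 1 - P^2, dm = -m dt + P(dY - m dt)
m, P = 0.0, 0.0
for k in range(n):
    P += (-2.0 * P + 1.0 - P**2) * dt
    m += -m * dt + P * (dY[k] - m * dt)
print(particle_mean, m)  # the two posterior means should be close
```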
Exercises
(1) Prove the second representation of (51).
(2) Show that P̃_t is a probability measure and that under P̃_t, {(W_s, Y_s); 0 ≤ s ≤ t} is a 2-dimensional Brownian motion.
(3) If B is a Brownian motion and 0 ≤ s ≤ t, show that σ{B_r; r ≤ t} = σ{B_r; r ≤ s} ∨ σ{B_r − B_s; s ≤ r ≤ t}.
(4) Show that (56) comes from (55).
(5) Suppose that lim_{n→∞} X_n = 0 in L¹. Let G be a sub-sigma-algebra of F. Show that lim_{n→∞} E[X_n|G] = 0. Hint: write X_n = X_n⁺ − X_n⁻.
CHAPTER 12
Option Pricing
Suppose that we have an asset which evolves according to
dS_t = bS_t dt + σS_t dW_t
S_0 = S∘.
Suppose that we have a European option with payoff function φ at expiry T; i.e., we get the payoff φ(S_T) at time T. We need to price the option at time 0. Let's suppose also that the current interest rate is r.
Let Ṽ_t denote the value of the option at time t. We want to replicate Ṽ by using the stock and a bond. Namely, we want to find processes w^a and w^b such that
Ṽ_t = w^a_t S_t + w^b_t B_t
where Ḃ_t = rB_t. Moreover, we want w^a and w^b to be such that
(57) dṼ_t = w^a_t dS_t + w^b_t Ḃ_t dt;
technically, this is the self-financing condition. Suppose that Ṽ_t = V(t, S_t) where V is sufficiently regular. Then (57) is equivalent to
{(∂V/∂t)(t, S_t) + (∂V/∂S)(t, S_t) bS_t + (1/2)σ²S_t² (∂²V/∂S²)(t, S_t)} dt + (∂V/∂S)(t, S_t) σS_t dW_t
= w^a_t bS_t dt + w^a_t σS_t dW_t + w^b_t rB_t dt.
Let's equate the dW_t terms. We get that
(58) w^a_t = (∂V/∂S)(t, S_t).
The right-hand side is called the delta of the option, and this is known as delta-hedging. Let's now equate the dt terms and use (58). We get that
(∂V/∂t)(t, S_t) + (∂V/∂S)(t, S_t) bS_t + (1/2)σ²S_t² (∂²V/∂S²)(t, S_t) = (∂V/∂S)(t, S_t) bS_t + w^b_t rB_t
= (∂V/∂S)(t, S_t) bS_t + r(V(t, S_t) − w^a_t S_t)
= (∂V/∂S)(t, S_t) bS_t + r(V(t, S_t) − (∂V/∂S)(t, S_t) S_t).
In other words, V satisfies the PDE
(59)
(∂V/∂t)(t, S) + rS(∂V/∂S)(t, S) + (1/2)σ²S² (∂²V/∂S²)(t, S) = rV(t, S), t < T
V(T, S) = φ(S).
This is the famed Black-Scholes PDE. Let's next see what happens if we represent V via Feynman-Kac. Suppose that
(60) dS̃_t = rS̃_t dt + σS̃_t dW_t, S̃_0 = S∘.
Then by Ito's formula e^{−rt}V(t, S̃_t) is a martingale, so
V(0, S∘) = E[e^{−rT}V(T, S̃_T)] = E[e^{−rT}φ(S̃_T)].
This is pricing under the risk-neutral measure.
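This representation can be checked directly. The sketch below (a standard illustration; the parameters are arbitrary choices) prices a European call φ(S) = (S − K)⁺ by simulating S̃_T under the risk-neutral dynamics (60) and compares with the closed-form Black-Scholes price:

```python
import math
import numpy as np

# Risk-neutral pricing sketch: V(0, S) = E[e^{-rT} phi(S_T)] with
# dS = r S dt + sigma S dW.  For a call this matches the closed-form
# Black-Scholes price.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# exact terminal law: S_T = S0 exp((r - sigma^2/2)T + sigma W_T)
rng = np.random.default_rng(3)
Z = rng.normal(size=1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
mc_price = math.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# closed-form Black-Scholes call price
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)
print(mc_price, bs_price)  # both near 10.45
```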
1. Bonds
We can do something similar for bonds. A bond is a promise to pay a coupon at a certain time T, at least if the bond doesn't default. The yield of the bond compensates for the credit risk.
Let τ be the time that the bond defaults. Suppose that we have a bond B∘ which pays a coupon of $1 at time T∘. The dynamics of the bond are thus
B∘_t = exp[−∫_{s=t}^{T∘} r(s) ds] χ_{τ>t}.
Defining J_t := χ_{τ≤t}, we have that
dB∘_t = r(t)B∘_t dt − B∘_{t−} dJ_t
B∘_{T∘} = χ_{τ>T∘}.
We want to understand the risk-neutral price of default. Fix T ∈ (0, T∘) and consider a credit derivative which pays $1 at time T if τ > T (this will of course be another bond, but let's avoid that interpretation for the moment). The price of the credit derivative is Ṽ(t). We want to replicate the credit derivative with a riskless asset Π (which grows at rate r*) and with the bond B∘; namely, we want that
Ṽ(t) = w^{(1)}_t Π_t + w^{(2)}_t B∘_t
for some weights w^{(1)} and w^{(2)}. We want this portfolio to be self-financing, i.e., that
dṼ(t) = w^{(1)}_{t−} r*Π_{t−} dt + w^{(2)}_{t−} dB∘_t
= w^{(1)}_{t−} r*Π_{t−} dt + w^{(2)}_{t−} {r(t)B∘_{t−} dt − B∘_{t−} dJ_t}.
Let's now assume that
Ṽ(t) = V(t, B_t);
i.e., the price of the credit derivative is a function of time and the bond price (we write B for B∘). Then we have that
{(∂V/∂t)(t, B_{t−}) + (∂V/∂B)(t, B_{t−}) r(t)B_{t−}} dt + {V(t, 0) − V(t, B_{t−})} dJ_t
= w^{(1)}_{t−} r*Π_{t−} dt + w^{(2)}_{t−} {r(t)B_{t−} dt − B_{t−} dJ_t}.
Equating the values of the jumps, let's set
(61) w^{(2)}_t = −(V(t, 0) − V(t, B_t))/B_t;
then
w^{(2)}_{t−} = −(V(t, 0) − V(t, B_{t−}))/B_{t−}.
Using the fact that
w^{(1)}_t Π_t = V(t, B_t) − w^{(2)}_t B_t,
we have that
(∂V/∂t)(t, B_{t−}) + (∂V/∂B)(t, B_{t−}) r(t)B_{t−} = r*V(t, B_{t−}) + (r(t) − r*) w^{(2)}_{t−} B_{t−}
= r*V(t, B_{t−}) − (r(t) − r*){V(t, 0) − V(t, B_{t−})}
= r(t)V(t, B_{t−}) − (r(t) − r*)V(t, 0).
Thus the Black-Scholes PDE is
(62)
(∂V/∂t)(t, B) + (∂V/∂B)(t, B) r(t)B = r(t)V(t, B) − (r(t) − r*)V(t, 0)
= r*V(t, B) − (r(t) − r*){V(t, 0) − V(t, B)}.
The boundary conditions on this PDE are
V(t, 0) = 0 if t < T
V(T, B) = 1 if B > 0.
Using the first boundary condition in the PDE, we get that in fact
(∂V/∂t)(t, B) + (∂V/∂B)(t, B) r(t)B = r(t)V(t, B).
Turning things around, let's define w^{(2)} as in (61) and suppose that
V(t, B_t) = w^{(1)}_t Π_t + w^{(2)}_t B_t.
We then have that
dV(t, B_t) = {(∂V/∂t)(t, B_{t−}) + (∂V/∂B)(t, B_{t−}) r(t)B_{t−}} dt + {V(t, 0) − V(t, B_{t−})} dJ_t
= r*V(t, B_{t−}) dt − (r(t) − r*){V(t, 0) − V(t, B_{t−})} dt + {V(t, 0) − V(t, B_{t−})} dJ_t
= r*V(t, B_{t−}) dt + (r(t) − r*) w^{(2)}_{t−} B_{t−} dt − w^{(2)}_{t−} B_{t−} dJ_t
= r*{V(t, B_{t−}) − w^{(2)}_{t−} B_{t−}} dt + w^{(2)}_{t−} {r(t)B_{t−} dt − B_{t−} dJ_t}
= r*w^{(1)}_{t−} Π_{t−} dt + w^{(2)}_{t−} dB_t,
so the portfolio is indeed self-financing. Let's now do the Feynman-Kac formula for (62). Assume that τ* has hazard rate r(t) − r*; i.e.,
P{τ* > t} = exp[−∫_{s=0}^t (r(s) − r*) ds].
Defining
J_t := χ_{τ*≤t},
we have that J jumps from 0 to 1 at time τ* and thus that
dJ_t = (r(t) − r*)(1 − J_{t−}) dt + dM_t,
where M is a martingale. Let's finally set
B*_t := exp[−∫_{s=t}^T r(s) ds] (1 − J_t);
these are the dynamics of a bond with yield r* and default rate r(t) − r*. Note that thus
dB*_t = r(t)B*_{t−} dt − (r(t) − r*)B*_{t−} dt − exp[−∫_{s=t}^T r(s) ds] dM_t = r*B*_{t−} dt − exp[−∫_{s=t}^T r(s) ds] dM_t.
Thus B* has the required form: growth at the riskless rate r* plus a martingale term. Setting
Z_t := V(t, B*_t) exp[−r*t],
we have that
dZ_t = e^{−r*t}[{(∂V/∂t)(t, B*_{t−}) + (∂V/∂B)(t, B*_{t−}) r(t)B*_{t−} − r*V(t, B*_{t−})}(1 − J_{t−}) dt + {V(t, 0) − V(t, B*_{t−})} dJ_t]
= e^{−r*t}[(r(t) − r*)V(t, B*_{t−})(1 − J_{t−}) dt − V(t, B*_{t−}){(r(t) − r*)(1 − J_{t−}) dt + dM_t}]
= −e^{−r*t}V(t, B*_{t−}) dM_t
(using (62) and the boundary condition V(t, 0) = 0).
Thus Z is a martingale, so
V(0, B*_0) = e^{−r*T}E[V(T, B*_T)] = e^{−r*T}P{τ* > T}.
This is what one would expect.
Exercises
(1) Show that if V satisfies (59), w^a_t = (∂V/∂S)(t, S_t), and w^b_t = B_t⁻¹{V(t, S_t) − w^a_t S_t}, then (57) holds.
(2) Show that if S̃ satisfies (60), then e^{−rt}V(t, S̃_t) is indeed a martingale.
CHAPTER 13
Feller Conditions
Suppose that we have an SDE
(63) dX_t = b(X_t) dt + σ(X_t) dW_t, X_0 = x∘.
Suppose that a < x∘ < b. We want to understand how X exits the interval (a, b). We suppose that b and σ are continuous on (a, b) and that σ > 0 on (a, b).
Define first Y_t = p(X_t) where p is smooth. Then
dY_t = {bp′ + (1/2)σ²p″}(X_t) dt + (σp′)(X_t) dW_t.
Let's assume that
bp′ + (1/2)σ²p″ ≡ 0,
so that Y is a martingale. In other words, we want that
p″(x) = −(2b/σ²)(x) p′(x).
Since σ > 0 on (a, b), a solution of this ODE exists on (a, b). Adding the initial conditions p(x∘) = 0 and p′(x∘) = 1, we have that
p(x) = ∫_{y=x∘}^x exp[−∫_{z=x∘}^y (2b/σ²)(z) dz] dy.
We note that p′ > 0, so p is invertible. It is known as the scale function. Defining
σ̃(y) := (p′σ)(p⁻¹(y)), y ∈ (p(a+), p(b−)),
we have that
dY_t = σ̃(Y_t) dW_t, Y_0 = 0.
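The scale function is easy to exercise numerically. The sketch below (an illustrative example, not from the text; the OU coefficients, interval, grids, and tolerance are arbitrary choices) takes b(x) = −x and σ ≡ 1, so p(x) = ∫₀ˣ e^{y²} dy, and checks by Monte Carlo the standard optional-stopping consequence of Y = p(X) being a martingale: P{X exits (α, β) at β} = (p(x∘) − p(α))/(p(β) − p(α)).

```python
import numpy as np

# Scale-function sketch for the OU process dX = -X dt + dW, for which
# p(x) = int_0^x exp(y^2) dy.  Optional stopping of the martingale
# Y = p(X) gives the exit probability formula checked below.
alpha, beta, x0 = -0.5, 1.0, 0.0

def p(x, n=20_001):
    """p(x) = int_0^x exp(y^2) dy by the trapezoid rule."""
    y = np.linspace(0.0, x, n)
    f = np.exp(y**2)
    return float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(y)))

exact = (p(x0) - p(alpha)) / (p(beta) - p(alpha))

rng = np.random.default_rng(4)
n_paths, dt = 5_000, 1e-3
X = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
hit_beta = np.zeros(n_paths, dtype=bool)
while alive.any():
    X[alive] += -X[alive] * dt + np.sqrt(dt) * rng.normal(size=int(alive.sum()))
    exited = alive & ((X <= alpha) | (X >= beta))
    hit_beta[exited] = X[exited] >= beta
    alive &= ~exited
print(exact, hit_beta.mean())  # the two should agree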
Fix now [α, β] ⊂⊂ (a, b) containing x∘, and define
τ_{α,β} := inf{t > 0 : X_t ∉ [α, β]}.
Then
κ_{α,β} := inf_{y∈p[α,β]} σ̃(y)
is positive. We first claim that X exits [α, β]; i.e., Y exits p[α, β]. Note that since α < x∘ < β, p is increasing, and p(x∘) = 0, we have p(α) < 0 < p(β). We have that
Y²_{τ_{α,β}∧t} = ∫_{s=0}^{τ_{α,β}∧t} σ̃²(Y_s) ds + 2∫_{s=0}^{τ_{α,β}∧t} Y_s σ̃(Y_s) dW_s.
Thus
E[Y²_{τ_{α,β}∧t}] ≥ κ²_{α,β} E[τ_{α,β} ∧ t].
Since |Y_{τ_{α,β}∧t}| ≤ max{|p(α)|, |p(β)|}, we have
E[τ_{α,β} ∧ t] ≤ (1/κ²_{α,β}) max{|p(α)|², |p(β)|²},
so in fact
(64) E[τ_{α,β}] ≤ (1/κ²_{α,β}) max{|p(α)|², |p(β)|²}
and hence P{τ_{α,β} < ∞} = 1.
-
and upon letting n↗∞, we have that
E[Ỹ_t | F_s] ≤ Ỹ_s.
Thus Ỹ is a nonnegative supermartingale, and Ỹ_∞ := lim_{t→∞} Ỹ_t exists P-a.s. If t < τ, then t < τ_{α_n,β_n} for n large enough, so in fact Ỹ^{(n)}_t = Y_t − p(a+) for n large enough. Thus Ỹ_t = Y_t − p(a+) if t < τ, so in fact
(66) lim_{t↗τ} Y_t = Ỹ_∞ + p(a+).
A symmetric argument gives us (66) if p(b−) < ∞. Suppose next that p(a+) = −∞ and p(b−) = ∞. Then p is a diffeomorphism from [x∘, b) to [0, ∞), so
P{
sup0≤t
-
We also note that if U and V satisfy (67), then
|U(y) − V(y)| ≤ MK ∫_{w=0}^{|y|} |U(w) − V(w)| dw,
so U = V by Gronwall's inequality; in other words, the solution of (67) is unique. We note that the U_n's of (68) are nonnegative; thus U ≥ 0 and thus in fact U ≥ 1. Fix K > 0 and define
define
mKdef= inf|y|≤K
2
σ̃2(y).
Then for y ≥ K, we have that
U(y) ≥ ∫_{z=K}^y {∫_{w=0}^K m_K dw} dz = (y − K)K m_K.
In other words, lim_{y↗∞} U(y) = ∞. Similarly, lim_{y↘−∞} U(y) = ∞. From a probabilistic standpoint, U is important since
Z_t := e^{−t}U(Y_t)
is a nonnegative martingale. As in our above martingale analysis, this means that
Z̃^{(n)}_t := Z_{t∧τ_{α_n,β_n}}
is a nonnegative martingale. Thus Z̃_t := lim_{n→∞} Z̃^{(n)}_t is well-defined and is a nonnegative supermartingale. Thus Z_∞ := lim_{t↗∞} Z̃_t is well-defined and
E[lim_{t↗τ} e^{−t}U(Y_t)] = E[Z_∞] ≤ E[U(Y_0)] = 1.
If τ
-
Exercises
(1) Let W be a Brownian motion. Define Y_t := W_t³.
(a) Show that dY_t = 3Y_t^{2/3} dW_t + 3Y_t^{1/3} dt.
(b) Find the scale function for Y.
(2) Show that if p(a+) = −∞, then (65) holds.
(3) Show that if p(b−)
-
CHAPTER 14
Wong-Zakai
Let W be a Brownian motion. For each δ ∈ (0, 1), define
W^δ_t := (⌊t/δ⌋ + 1 − t/δ) W_{⌊t/δ⌋δ} + (t/δ − ⌊t/δ⌋) W_{(⌊t/δ⌋+1)δ}.
In other words, if nδ ≤ t < (n + 1)δ,
Ẇ^δ_t = (W_{(n+1)δ} − W_{nδ})/δ.
Suppose now that b and σ are smooth drift and diffusion coefficients, and that they and their derivatives are bounded. Fix x∘ ∈ R and consider the random ODE
Ẋ^δ_t = b(X^δ_t) + σ(X^δ_t)Ẇ^δ_t, X^δ_0 = x∘.
Where does X^δ go as δ ↘ 0? Let X solve
dX_t = b(X_t) dt + σ(X_t) dW_t + (1/2)(σσ′)(X_t) dt, X_0 = x∘.
We claim that for every T > 0,
(69) lim_{δ↘0} sup_{0≤t≤T} E[|X^δ_t − X_t|] = 0.
Throughout this chapter, we will use K to denote a constant (which may change from line to line) which depends only on b and σ and their derivatives.
Note that the small intervals compensate the δ⁻¹ in the denominator. For δ > 0, n ∈ N, and nδ ≤ t < (n + 1)δ,
(70) |X^δ_t − X^δ_{nδ}| ≤ K{δ + |W_{(n+1)δ} − W_{nδ}|}.
For δ > 0 and t ≥ 0, define
τ_δ(t) := ⌊t/δ⌋δ.
Then
X^δ_t = x∘ + ∫_{s=0}^t b(X^δ_s) ds + ∫_{s=0}^t σ(X^δ_s) ((W_{τ_δ(s)+δ} − W_{τ_δ(s)})/δ) ds.
Let's look at the second term. We have that
∫_{s=0}^t σ(X^δ_s) ((W_{τ_δ(s)+δ} − W_{τ_δ(s)})/δ) ds = ∫_{s=0}^{τ_δ(t)} σ(X^δ_s) ((W_{τ_δ(s)+δ} − W_{τ_δ(s)})/δ) ds + E^{A,δ}_t
where
E^{A,δ}_t := ∫_{s=τ_δ(t)}^t σ(X^δ_s) ((W_{τ_δ(s)+δ} − W_{τ_δ(s)})/δ) ds.
We calculate that
|E^{A,δ}_t| ≤ K|W_{τ_δ(t)+δ} − W_{τ_δ(t)}|.
Fix now n ∈ N; let's consider
∫_{s=nδ}^{(n+1)δ} σ(X^δ_s) ((W_{(n+1)δ} − W_{nδ})/δ) ds.
For s ∈ [nδ, (n + 1)δ),
σ(X^δ_s) = σ(X^δ_{nδ}) + ∫_{r=nδ}^s (bσ′)(X^δ_r) dr + ∫_{r=nδ}^s (σσ′)(X^δ_r) dr ((W_{(n+1)δ} − W_{nδ})/δ)
and hence
∫_{s=nδ}^{(n+1)δ} σ(X^δ_s) ((W_{(n+1)δ} − W_{nδ})/δ) ds = σ(X^δ_{nδ}){W_{(n+1)δ} − W_{nδ}}
+ ∫_{s=nδ}^{(n+1)δ} ∫_{r=nδ}^s (σσ′)(X^δ_r) dr ((W_{(n+1)δ} − W_{nδ})/δ)² ds + E^{B,δ}_n
where
E^{B,δ}_n := ∫_{s=nδ}^{(n+1)δ} ∫_{r=nδ}^s (bσ′)(X^δ_r) dr ((W_{(n+1)δ} − W_{nδ})/δ) ds.
We note that
(71) ∫_{s=nδ}^{(n+1)δ} {∫_{r=nδ}^s dr} ds = δ²/2.
Thus
|E^{B,δ}_n| ≤ K|W_{(n+1)δ} − W_{nδ}|δ.
Next note that
∫_{s=nδ}^{(n+1)δ} ∫_{r=nδ}^s (σσ′)(X^δ_r) dr ((W_{(n+1)δ} − W_{nδ})/δ)² ds
= ∫_{s=nδ}^{(n+1)δ} ∫_{r=nδ}^s (σσ′)(X^δ_{nδ}) dr ((W_{(n+1)δ} − W_{nδ})/δ)² ds + E^{C,δ}_n
where
E^{C,δ}_n := ∫_{s=nδ}^{(n+1)δ} ∫_{r=nδ}^s {(σσ′)(X^δ_r) − (σσ′)(X^δ_{nδ})} dr ((W_{(n+1)δ} − W_{nδ})/δ)² ds.
By (70), we have that
|E^{C,δ}_n| ≤ K{δ + |W_{(n+1)δ} − W_{nδ}|}|W_{(n+1)δ} − W_{nδ}|².
Next note that
(W_{(n+1)δ} − W_{nδ})² = 2∫_{s=nδ}^{(n+1)δ} (W_s − W_{nδ}) dW_s + δ.
Thus
∫_{s=nδ}^{(n+1)δ} ∫_{r=nδ}^s (σσ′)(X^δ_{nδ}) dr ((W_{(n+1)δ} − W_{nδ})/δ)² ds = (1/2)(σσ′)(X^δ_{nδ})δ + E^{D,δ}_n
where
E^{D,δ}_n := (σσ′)(X^δ_{nδ}) ∫_{s=nδ}^{(n+1)δ} (W_s − W_{nδ}) dW_s.
Combining things together, we have that
(72)
X^δ_t = x∘ + ∫_{s=0}^t b(X^δ_s) ds + Σ_{0≤n≤⌊t/δ⌋} ∫_{s=nδ}^{(n+1)δ} σ(X^δ_{nδ}) dW_s + (1/2) Σ_{0≤n≤⌊t/δ⌋} ∫_{s=nδ}^{(n+1)δ} (σσ′)(X^δ_{nδ}) ds
+ E^{A,δ}_t + Σ_{0≤n≤⌊t/δ⌋} {E^{B,δ}_n + E^{C,δ}_n + E^{D,δ}_n}.
We want this to look like
(73) X_t = x∘ + ∫_{s=0}^t b(X_s) ds + ∫_{s=0}^t σ(X_s) dW_s + (1/2)∫_{s=0}^t (σσ′)(X_s) ds.
Let's rewrite some of the terms in (72). We have that
Σ_{0≤n≤⌊t/δ⌋} ∫_{s=nδ}^{(n+1)δ} σ(X^δ_{nδ}) dW_s = ∫_{s=0}^t σ(X^δ_{τ_δ(s)}) dW_s + E^{E,δ}_t
where
E^{E,δ}_t = σ(X^δ_{τ_δ(t)}){W_t − W_{τ_δ(t)}};
then
|E^{E,δ}_t| ≤ K|W_t − W_{τ_δ(t)}|.
We also have that
Σ_{0≤n≤⌊t/δ⌋} ∫_{s=nδ}^{(n+1)δ} (1/2)(σσ′)(X^δ_{nδ}) ds = ∫_{s=0}^t (1/2)(σσ′)(X^δ_s) ds + E^{F,δ}_t + E^{G,δ}_t
where
E^{F,δ}_t := ∫_{s=τ_δ(t)}^t (1/2)(σσ′)(X^δ_{τ_δ(s)}) ds
E^{G,δ}_t := ∫_{s=0}^t (1/2){(σσ′)(X^δ_{τ_δ(s)}) − (σσ′)(X^δ_s)} ds.
We note that
|E^{F,δ}_t| ≤ Kδ
|E^{G,δ}_t| ≤ K∫_{s=0}^t {δ + |W_s − W_{τ_δ(s)}|} ds.
We now have that
X^δ_t = x∘ + ∫_{s=0}^t b(X^δ_s) ds + ∫_{s=0}^t σ(X^δ_{τ_δ(s)}) dW_s + ∫_{s=0}^t (1/2)(σσ′)(X^δ_s) ds + Ẽ^δ_t
where
Ẽ^δ_t := E^{A,δ}_t + Σ_{0≤n≤⌊t/δ⌋} {E^{B,δ}_n + E^{C,δ}_n + E^{D,δ}_n} + E^{E,δ}_t + E^{F,δ}_t + E^{G,δ}_t.
Let's compare the evolution of X^δ with that of (73). We have that
X^δ_t − X_t = ∫_{s=0}^t {B(X^δ_s) − B(X_s)} ds + ∫_{s=0}^t {σ(X^δ_{τ_δ(s)}) − σ(X_s)} dW_s + Ẽ^δ_t
where
B(x) := b(x) + (1/2)(σσ′)(x).
Motivated by our calculations on existence and uniqueness of SDE's, we now have that
E[|X^δ_t − X_t|²] ≤ K_t ∫_{s=0}^t E[|X^δ_s − X_s|²] ds + K∫_{s=0}^t E[|X^δ_s − X^δ_{τ_δ(s)}|²] ds + KE[|Ẽ^δ_t|²].
Let's finally bound the errors. We have that
∫_{s=0}^t E[|X^δ_s − X^δ_{τ_δ(s)}|²] ds ≤ K∫_{s=0}^t {δ² + E[|W_s − W_{τ_δ(s)}|²]} ds ≤ K∫_{s=0}^t {δ² + δ} ds ≤ Ktδ.
We also have that
E[|E^{A,δ}_t|²] ≤ KE[|W_{τ_δ(t)+δ} − W_{τ_δ(t)}|²] ≤ Kδ
E[|E^{E,δ}_t|²] ≤ KE[|W_t − W_{τ_δ(t)}|²] ≤ Kδ
E[|E^{F,δ}_t|²] ≤ Kδ²
E[|E^{G,δ}_t|²] ≤ K_t ∫_{s=0}^t {δ² + E[|W_s − W_{τ_δ(s)}|²]} ds ≤ Kt²δ.
We also have that
E[(Σ_{n≤⌊t/δ⌋} E^{B,δ}_n)²] ≤ K⌊t/δ⌋ Σ_{n≤⌊t/δ⌋} E[|W_{(n+1)δ} − W_{nδ}|²]δ² ≤ K(⌊t/δ⌋)²δ³ ≤ Kt²δ
E[(Σ_{n≤⌊t/δ⌋} E^{C,δ}_n)²] ≤ K⌊t/δ⌋ Σ_{n≤⌊t/δ⌋} E[|W_{(n+1)δ} − W_{nδ}|⁶] ≤ K(⌊t/δ⌋)²δ³ ≤ Kt²δ.
Finally,
E[(Σ_{n≤⌊t/δ⌋} E^{D,δ}_n)²] ≤ KE[(∫_{s=0}^{τ_δ(t)} (W_s − W_{τ_δ(s)}) dW_s)²] = K∫_{s=0}^{τ_δ(t)} E[(W_s − W_{τ_δ(s)})²] ds ≤ Ktδ.
To summarize, we have
E[|X^δ_t − X_t|²] ≤ K_t ∫_{s=0}^t E[|X^δ_s − X_s|²] ds + K(1 + t²)δ.
Gronwall's inequality then implies that
E[|X^δ_t − X_t|²] ≤ K(1 + t²)δ exp[K_t t].
Letting δ ↘ 0, we get (69).
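The Wong-Zakai correction can be seen numerically. The sketch below (an illustrative choice of σ with b = 0, along with arbitrary grids, path count, and tolerances; not from the text) integrates the ODE driven by the piecewise-linear W^δ and compares it, on the same Brownian path, with the Ito SDE both with and without the correction drift (1/2)(σσ′):

```python
import numpy as np

# Wong-Zakai sketch: the ODE dX/dt = sigma(X) dW^delta/dt converges to
# the Ito SDE with extra drift (1/2) sigma sigma'.  Here
# sigma(x) = 1 + 0.5 sin(x) and b = 0.
sigma = lambda x: 1.0 + 0.5 * np.sin(x)
corr = lambda x: 0.5 * sigma(x) * 0.5 * np.cos(x)   # (1/2) sigma sigma'

rng = np.random.default_rng(5)
T, delta, m, n_paths = 1.0, 0.01, 10, 2000
h = delta / m                        # fine integration step inside each block
n_fine = int(T / h)
dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_fine))

X_ode = np.zeros(n_paths)            # ODE driven by piecewise-linear noise
X_cor = np.zeros(n_paths)            # Ito SDE with the correction drift
X_raw = np.zeros(n_paths)            # Ito SDE without the correction
for k in range(n_fine):
    blk = (k // m) * m               # start of the current delta-block
    slope = dW[:, blk:blk + m].sum(axis=1) / delta   # dW^delta/dt on the block
    X_ode += sigma(X_ode) * slope * h
    X_cor += corr(X_cor) * h + sigma(X_cor) * dW[:, k]
    X_raw += sigma(X_raw) * dW[:, k]
err_cor = np.abs(X_ode - X_cor).mean()
err_raw = np.abs(X_ode - X_raw).mean()
print(err_cor, err_raw)  # the corrected SDE should be much closer
```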
CHAPTER 15
Euler-Maruyama approximations
Let’s now consider the stochastic differential equation
(74) dX_t = b(X_t) dt + σ(X_t) dW_t, 0 ≤ t ≤ T, X_0 = x∘,
where b and σ are "nice" functions and x∘ is some initial condition. Our choice of the time horizon T is arbitrary.
A natural way to approximate the solution X of (74) is via the Euler-Maruyama scheme. Fix δ > 0 and define
(75) X^δ_{(n+1)δ} = X^δ_{nδ} + b(X^δ_{nδ})δ + σ(X^δ_{nδ}){W_{(n+1)δ} − W_{nδ}}, X^δ_0 = x∘.
We want to compare X^δ_{nδ} to X_{nδ}. To do so, let's convert X^δ into an SDE. Define τ_δ(t) := ⌊t/δ⌋δ. Let X̂^δ be the solution of the SDE
(76) dX̂^δ_t = b(X̂^δ_{τ_δ(t)}) dt + σ(X̂^δ_{τ_δ(t)}) dW_t, 0 ≤ t ≤ T, X̂^δ_0 = x∘.
Then X^δ_{nδ} = X̂^δ_{nδ}.
To compare X and X̂^δ, we have
X_t − X̂^δ_t = ∫_{s=0}^t {b(X_s) − b(X̂^δ_{τ_δ(s)})} ds + ∫_{s=0}^t {σ(X_s) − σ(X̂^δ_{τ_δ(s)})} dW_s
= ∫_{s=0}^t {b(X_s) − b(X̂^δ_s)} ds + ∫_{s=0}^t {σ(X_s) − σ(X̂^δ_s)} dW_s
+ ∫_{s=0}^t {b(X̂^δ_s) − b(X̂^δ_{τ_δ(s)})} ds + ∫_{s=0}^t {σ(X̂^δ_s) − σ(X̂^δ_{τ_δ(s)})} dW_s.
Thus
E[|X_t − X̂^δ_t|²] ≤ K_t ∫_{s=0}^t E[|X_s − X̂^δ_s|²] ds + K_t ∫_{s=0}^t E[|X̂^δ_s − X̂^δ_{τ_δ(s)}|²] ds.
Since
X̂^δ_t = X̂^δ_{τ_δ(t)} + b(X̂^δ_{τ_δ(t)})(t − τ_δ(t)) + σ(X̂^δ_{τ_δ(t)})(W_t − W_{τ_δ(t)}),
we have that
E[|X̂^δ_t − X̂^δ_{τ_δ(t)}|²] ≤ K{δ² + δ}.
Thus
E[|X_t − X̂^δ_t|²] ≤ K exp[Kt²]δ.
We thus have that
lim_{δ↘0} sup_{0≤t≤T} E[|X_t − X̂^δ_t|²] = 0.
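The scheme is easy to test on an SDE with a known solution. The sketch below (geometric Brownian motion; the parameters, grids, and tolerances are arbitrary choices, not from the text) measures the strong error of (75) against the exact solution on the same Brownian path:

```python
import numpy as np

# Euler-Maruyama sketch for dX = b X dt + s X dW (geometric Brownian
# motion), whose exact solution X_T = x0 exp((b - s^2/2)T + s W_T)
# lets us measure the strong error directly.
b, s, x0, T = 0.5, 0.4, 1.0, 1.0
rng = np.random.default_rng(6)
n_paths = 20_000

def strong_error(n_steps):
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.full(n_paths, x0)
    for k in range(n_steps):
        X = X + b * X * dt + s * X * dW[:, k]
    exact = x0 * np.exp((b - 0.5 * s**2) * T + s * dW.sum(axis=1))
    return float(np.abs(X - exact).mean())

e_coarse, e_fine = strong_error(50), strong_error(800)
print(e_coarse, e_fine)  # the finer grid gives a smaller error
```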
CHAPTER 16
Large Deviations
Suppose that {Xε}_{ε∈(0,1)} is a collection of [0,1]-valued random variables such that, for some f ∈ C²[0,1],
(77) P{Xε ∈ A} = c_ε ∫_{x∈A} exp[−(1/ε)f(x)] dx
for all A ∈ B[0,1], where
c_ε := {∫_{x∈[0,1]} exp[−(1/ε)f(x)] dx}⁻¹
with lim_{ε→0} ε ln c_ε = 0.
For any A ∈ B[0,1],
lim sup_{ε→0} ε ln P{Xε ∈ A} ≤ −inf_{x∈Ā} f(x)
lim inf_{ε→0} ε ln P{Xε ∈ A} ≥ −inf_{x∈A°} f(x).
Let L be Lebesgue measure on ([0,1], B[0,1]). The upper bound is simple:
P{Xε ∈ A} ≤ c_ε L(A) exp[−(1/ε) inf_{x∈A} f(x)] ≤ c_ε exp[−(1/ε) inf_{x∈Ā} f(x)].
To get the lower bound, fix x* ∈ A° and δ > 0. Define O := {x ∈ [0,1] : f(x) < f(x*) + δ} ∩ A°. Since f is continuous, O is open (in the topology [0,1] inherits from R) and nonempty, and thus L(O) > 0. Hence
P{Xε ∈ A} ≥ c_ε L(O) exp[−(1/ε){f(x*) + δ}].
This gives the lower bound by taking ε → 0, then δ → 0, and then varying x* over A°.
Our basic setup is as follows. We have a collection {Xε}_{ε∈(0,1)} of random variables which take values in some Polish space X. We will see that the following is the "correct" way to think about exponentially rare events.
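Before formalizing this, the [0,1] example above can be checked numerically (an illustrative sketch; f, the set A, and the grids are arbitrary choices, not from the text): for a density proportional to exp(−f(x)/ε), ε ln P{Xε ∈ A} should tend to −inf_A f.

```python
import numpy as np

# Laplace-asymptotics sketch: f(x) = (x - 0.3)^2 on [0, 1] and
# A = [0.5, 1], so the limit of eps * ln P{X^eps in A} is
# -inf_A f = -(0.5 - 0.3)^2 = -0.04.
f = lambda x: (x - 0.3)**2
x = np.linspace(0.0, 1.0, 400_001)
w = np.gradient(x)                   # quadrature weights on the grid

def eps_log_prob(eps):
    dens = np.exp(-f(x) / eps)
    p = np.sum(dens[x >= 0.5] * w[x >= 0.5]) / np.sum(dens * w)
    return eps * np.log(p)

for eps in [1e-2, 1e-3, 1e-4]:
    print(eps, eps_log_prob(eps))    # approaches -0.04 as eps decreases
```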
Definition 0.3. We say that {Xε}_{ε∈(0,1)} has a large deviations principle with rate function I : X → [0,∞] if
(1) For each s ≥ 0, Φ(s) := {x ∈ X : I(x) ≤ s} is compact.
(2) For each closed subset F of X,
lim sup_{ε→0} ε ln P{Xε ∈ F} ≤ −inf_{x∈F} I(x).
(3) For each open subset G of X,
lim inf_{ε→0} ε ln P{Xε ∈ G} ≥ −inf_{x∈G} I(x).
We will see why this is correct in a moment. It turns out, however, that there is an equivalent definition fora large deviations principle. We here let d be the metric on X.
Definition 0.4. We say that {Xε}_{ε∈(0,1)} has a large deviations principle with rate function I : X → [0,∞] if
(1) For each s ≥ 0, Φ(s) := {x ∈ X : I(x) ≤ s} is compact.
(2) For each s ≥ 0 and each δ > 0,
lim sup_{ε→0} ε ln P{d(Xε, Φ(s)) ≥ δ} ≤ −s.
(3) For every x* ∈ X and every δ > 0,
lim inf_{ε→0} ε ln P{d(Xε, x*) < δ} ≥ −I(x*).
Let's now prove a large deviations result for Brownian motion. Fix a finite horizon T > 0 and define C_0[0,T] := {ϕ ∈ C[0,T] : ϕ(0) = 0}. For each ε ∈ (0,1), define Xε(t) := εW_t for all t ∈ [0,T]; then Xε is a C_0[0,T]-valued random variable. For each ϕ ∈ C_0[0,T], define
(78) I(ϕ) := (1/2)∫_0^T (ϕ̇(s))² ds if ϕ is absolutely continuous and ϕ(0) = 0; I(ϕ) := ∞ else.
We claim that εW has a large deviations principle in C0[0, T ] with rate functional given by (78).
Lemma 0.5. For each s ≥ 0, the level sets {ϕ ∈ C0[0, T ] : I(ϕ) ≤ s} of I are compact in C0[0, T ].
Proof. If I(ϕ) ≤ s, then for any 0 ≤ t₁ < t₂ ≤ T, the Cauchy-Schwarz inequality gives
|ϕ(t₂) − ϕ(t₁)| = |∫_{s=t₁}^{t₂} ϕ̇(s) ds| ≤ √(t₂ − t₁) (∫_{s=t₁}^{t₂} (ϕ̇(s))² ds)^{1/2} ≤ √(t₂ − t₁) √(2s).
Thus Φ(s) is bounded and equicontinuous, hence precompact by the Arzelà-Ascoli theorem. We thus need to show that Φ(s) is closed. This follows from the fact that
(79) I(ϕ) = sup{ (1/2) Σ_{j=0}^{n−1} |ϕ(t_{j+1}) − ϕ(t_j)|²/(t_{j+1} − t_j) : 0 = t₀ < t₁ < ··· < t_n = T }. �
Let’s next prove the lower bound. We use a measure change.
Lemma 0.6. For any ϕ ∈ C_0[0,T] and δ > 0,
lim inf_{ε↘0} ε² ln P{‖Xε − ϕ‖_C < δ} ≥ −I(ϕ).
Proof. The result is obvious if I(ϕ) =∞. Assume that I(ϕ)
-
so for ε > 0 sufficiently small, P̃ε{‖Xε‖_C < δ} is positive. Thus we can use Jensen's inequality to calculate that
Ẽε[χ{‖Xε‖C
-
Thus
P{‖X̃^{n,ε} − Xε‖_C ≥ δ} ≤ P{ sup_{0≤k≤n−1} sup_{k/n≤t≤(k+1)/n} |W_t − W_{k/n}| ≥ δ/(2ε) }
≤ Σ_{k=0}^{n−1} P{ sup_{k/n≤t≤(k+1)/n} |W_t − W_{k/n}| ≥ δ/(2ε) }
≤ nP{ sup_{0≤t≤1/n} |W_t| ≥ δ/(2ε) }
= nP{ sup_{0≤t≤1} |W_t| ≥ δ√n/(2ε) }
≤ 2n exp[−δ²n/(8ε²)].
This completes the proof. Thus for all n ∈ N, ε > 0, and δ > 0,
P{‖X̃^{n,ε} − Xε‖_C ≥ δ} ≤ 2n exp[−δ²n/(8ε²)].
We can now prove the upper bound.
Lemma 0.7. For s ≥ 0 and δ > 0,
lim sup_{ε↘0} ε² ln P{dist(Xε, Φ(s)) ≥ δ} ≤ −s.
Proof. We have that
P{dist(Xε, Φ(s)) ≥ δ} ≤ P{‖Xε − X̃^{n,ε}‖_C ≥ δ/2} + P{dist(X̃^{n,ε}, Φ(s)) ≥ δ/2}
≤ 2n exp[−δ²n/(32ε²)] + P{I(X̃^{n,ε}) ≥ s}.
We note that
I(X̃^{n,ε}) = (nε²/2) Σ_{k=0}^{n−1} |W_{(k+1)/n} − W_{k/n}|².
We thus have that for α > 0,
P{I(X̃^{n,ε}) ≥ s} ≤ P{ ((1−α)/ε²) I(X̃^{n,ε}) ≥ s(1−α)/ε² }
≤ exp[−s(1−α)/ε²] E[exp[((1−α)/ε²) I(X̃^{n,ε})]]
= exp[−s(1−α)/ε²] E[ ∏_{k=0}^{n−1} exp[(n(1−α)/2)(W_{(k+1)/n} − W_{k/n})²] ]
= exp[−s(1−α)/ε²] ( √(n/2π) ∫_{z∈R} exp[−αnz²/2] dz )^n
= exp[−s(1−α)/ε²] α^{−n/2}.
Collect things together. �
Theorem 0.8. We have that {Xε}_{ε>0} has an LDP in C_0[0,1] with rate functional I as in (78), which is equivalently written as
(80) I(ϕ) = (1/2)∫_{s=0}^1 (ϕ̇(s))² ds if ϕ̇ ∈ L²; I(ϕ) = ∞ else.
Proof. Collect the above calculations together. �
Exercises
(1) Show that Definitions 0.3 and 0.4 are equivalent.
(2) We here study (79).
(a) Suppose that 0 = t₀ < t₁ < ··· < t_n = T. Show that
Σ_{j=0}^{n−1} |ϕ(t_{j+1}) − ϕ(t_j)|²/(t_{j+1} − t_j) ≤ ∫_{t=0}^T (ϕ̇(t))² dt.
(b) Fix an open subset O of [0,1]. Show that there is a {ϕ_n}_{n∈N} ⊂ C([0,1]; [0,1]) such that ϕ_n ↗ χ_O. Hint: construct the ϕ_n using the distance function to [0,1] \ O.
(c) Since Lebesgue measure on ([0,T], B[0,T]) is regular, show that for any A ∈ B[0,T] and any δ > 0, there is a ϕ ∈ C([0,T]; [0,1]) such that
∫_{t=0}^T |χ_A(t) − ϕ(t)|² dt < δ.
(d) Assume that ψ ∈ L²[0,T]. Show that for δ > 0, there is a ψ* ∈ C[0,T] such that
∫_{t=0}^T |ψ(t) − ψ*(t)|² dt < δ.
(e) Show that if ϕ ∈ C¹[0,T], then
I(ϕ) ≤ sup{ (1/2) Σ_{j=0}^{n−1} |ϕ(t_{j+1}) − ϕ(t_j)|²/(t_{j+1} − t_j) : 0 = t₀ < t₁ < ··· < t_n = T }.
(f) Show that if I(ϕ)
-
CHAPTER 17
Malliavin Calculus
Consider the SDE
dX_t = σ(X_t) dW_t, X_0 = x∘.
Assume that σ ∈ C^∞_b(R) (i.e., σ and all of its derivatives are uniformly bounded) and that there is a σ∘ > 0 such that σ(x) ≥ σ∘ for all x ∈ R. We want to show that for each t > 0, X_t has a density; i.e., that there is a measurable map p_t : R → [0,∞) such that
P{X_t ∈ A} = ∫_{x∈A} p_t(x) dx, A ∈ B(R).
This is highly nontrivial. Let’s first of all cast our interest in a different way.
Proposition 0.9. Assume that there is a K > 0 such that
(81) |E[f″(X_t)]| ≤ K sup_{x∈R} |f(x)|
for every f ∈ C²_b(R). Then p_t exists.
Proof. Define
ϕ(θ) := E[f_θ(X_t)], θ ∈ R,
where
f_θ(x) := exp[√−1 θx], x ∈ R,
for all θ ∈ R (i.e., ϕ is the characteristic function of X_t). Since f″_θ = −θ²f_θ, if (81) holds we have that
θ²|ϕ(θ)| = |E[f″_θ(X_t)]| ≤ K sup_{x∈R} |f_θ(x)| = K.
Thus
|ϕ(θ)| ≤ (K + 1)/(1 + θ²), θ ∈ R,
so ϕ is integrable and
p_t(x) := (1/2π) ∫_{θ∈R} e^{−√−1 θx} ϕ(θ) dθ
is well-defined. For any bounded, integrable, and continuous function g, we then have that
E[g(X_t)] = lim_{ε→0} ∫_{x∈R} g(x) {(1/2π) ∫_{θ∈R} exp[−(ε/2)θ² − √−1 θx] ϕ(θ) dθ} dx = ∫_{x∈R} g(x) p_t(x) dx.
The result follows. �
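The Fourier-inversion step in the proof can be tried numerically. The sketch below (an illustrative special case, not from the text; the grids are arbitrary choices) takes σ ≡ 1, so X_t = W_t and ϕ(θ) = exp(−θ²t/2), and checks that inverting ϕ recovers the N(0, t) density:

```python
import math
import numpy as np

# Inversion sketch: when phi is integrable,
# p_t(x) = (1/2pi) int e^{-i theta x} phi(theta) d theta is a density.
# For sigma = 1, phi(theta) = exp(-theta^2 t / 2) and p_t is N(0, t).
t = 1.0
theta = np.linspace(-40.0, 40.0, 200_001)
dtheta = theta[1] - theta[0]
phi = np.exp(-theta**2 * t / 2.0)

def p(x):
    return float(np.sum(np.exp(-1j * theta * x) * phi).real * dtheta / (2 * np.pi))

gauss = lambda x: math.exp(-x**2 / (2 * t)) / math.sqrt(2 * math.pi * t)
print(p(0.0), gauss(0.0))  # both near 0.3989
```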
Let's see how to get this sort of inequality. Fix a bounded predictable function ξ and define
dXε_t = σ(Xε_t){dW_t − εξ_t dt}, Xε_0 = x∘.
In other words,
(82) Xε_t = x∘ + ∫_{s=0}^t σ(Xε_s) dW_s − ε∫_{s=0}^t σ(Xε_s)ξ_s ds.
Let's also define
G(ε) := exp[ε∫_{s=0}^t ξ_s dW_s − (ε²/2)∫_{s=0}^t ξ²_s ds].
By Girsanov's theorem, we then have that
(83) E[f(Xε_t)G(ε)] = E[f(X_t)].
Let's differentiate this. Differentiating (82) at ε = 0, we get
(84) Ξ_t = ∫_{s=0}^t σ′(X_s)Ξ_s dW_s − ∫_{s=0}^t σ(X_s)ξ_s ds.
We will explicitly solve this in a moment. Returning to (83), we have that
E[f′(X_t)Ξ_t] = −E[f(X_t){∫_{s=0}^t ξ_s dW_s}].
This looks promising if we can do something like bound Ξ from below (this is not really right, but it points our thoughts in useful directions).
Let's now solve for Ξ. Define
M_t := exp[∫_{s=0}^t σ′(X_s) dW_s − (1/2)∫_{s=0}^t (σ′(X_s))² ds].
Then
(85) Ξ_t = ∫_{s=0}^t M_t M_s⁻¹ σ(X_s)ξ_s ds.
Note that M satisfies
dM_t = σ′(X_t)M_t dW_t, M_0 = 1.
We haven't yet chosen ξ. We want to choose ξ to get something like a lower bound on Ξ. This makes sense if we choose
ξ_s = M_s⁻¹σ(X_s).
Then
Ξ_t = M_t ∫_{s=0}^t (M_s⁻¹σ(X_s))² ds.
Let's define
N_t := M_t⁻¹ = exp[−∫_{s=0}^t σ′(X_s) dW_s + (1/2)∫_{s=0}^t (σ′(X_s))² ds];
then X and N satisfy the joint SDE
dX_t = σ(X_t) dW_t, X_0 = x∘
dN_t = −σ′(X_t)N_t dW_t + (σ′(X_t))²N_t dt, N_0 = 1.
Let's now put ε's in a number of places. Consider the joint SDE
dXε_t = σ(Xε_t){dW_t − εNε_t σ(Xε_t) dt}, Xε_0 = x∘
dNε_t = −σ′(Xε_t)Nε_t dW_t, Nε_0 = 1.
For ε > 0, define
Wε_t := W_t − ε∫_{s=0}^t Nε_s σ(Xε_s) ds, 0 ≤ t ≤ T.
If F is any sufficiently nice function of the Brownian motion W, define
DF(W) := lim_{ε↘0} ε⁻¹{F(Wε) − F(W)}.
We can but won't be precise about this. In any case, we have that
DX_t = Ξ_t = N_t⁻¹ ∫_{s=0}^t (N_s σ(X_s))² ds.
Thus if f is differentiable,
D(f ∘ X_t) = f′(X_t)Ξ_t.
Also note that D satisfies Leibniz's rule. Let's also define
δ_t := ∫_{s=0}^t N_s σ(X_s) dW_s.
For any sufficiently nice random variable F, we then have that
E[DF] = −E[Fδ_t].
Thus
E[f′(X_t)F] = E[D(f ∘ X_t)(F/Ξ_t)] = E[D((f ∘ X_t)(F/Ξ_t))] − E[f(X_t) D(F/Ξ_t)]
= −E[f(X_t)(Fδ_t/Ξ_t)] − E[f(X_t) D(F/Ξ_t)] = −E[f(X_t) D*F]
where
D*F := D(F/Ξ_t) + Fδ_t/Ξ_t.
Thus, defining 1 to be the identically-one function, we have that
E[f″(X_t)] = −E[f′(X_t)D*1] = E[f(X_t)D*D*1].
We have (81) if
(86) E[|D*D*1|] < ∞.
CHAPTER 18
Stochastic Averaging
Consider the 2-dimensional SDE
dθε_t = (1/ε) dt, θε_0 = 0
dZε_t = σ(θε_t) dW_t, Zε_0 = 0.
We require here that σ be 1-periodic and smooth. We want to think of this as a diffusion on a cylinder, with θε being the angular variable and Zε being the axial variable. We want to understand the behavior of this SDE as ε ↘ 0. This is a 2-dimensional Markov process, where θε is the fast variable and Zε is the slow variable. We want to show that the slow variable Zε has a Markovian limit.
We have arranged our problem to be as simple as possible; we have that
Zε_t = ∫_{s=0}^t σ(s/ε) dW_s.
We want to use machinery which is fairly general. First, we claim that the laws of the Zε's are tight. Define σ̂ := ‖σ‖_C. By making a time change, we have that
Zε_t = Vε_{∫_{r=0}^t σ²(r/ε) dr}
for some ε-dependent Brownian motion Vε. Thus for any δ > 0 and T > 0,
lim_{η↘0} P{ sup_{0≤s≤t≤T, |t−s|≤η} |Zε_t − Zε_s| ≥ δ } ≤ lim_{η↘0} P{ sup_{0≤s′≤t′≤σ̂²T, |t′−s′|≤σ̂²η} |Vε_{t′} − Vε_{s′}| ≥ δ } = 0.
Thus the laws of the Zε's have at least one convergent subsequence.
Let's next identify the limit. We claim that lim_{ε↘0} Zε = Z, where Z_t = σ̄W_t with
σ̄ := {∫_{θ=0}^1 σ²(θ) dθ}^{1/2}.
Namely, we want to show that for any f ∈ C²_b(R), any 0 ≤ s₁ ≤ ··· ≤ s_N ≤ s < t, and any {φ_n}_{n=1}^N ⊂ C(R),
lim_{ε↘0} E[{f(Zε_t) − f(Zε_s) − ∫_{r=s}^t (σ̄²/2) f̈(Zε_r) dr} ∏_{n=1}^N φ_n(Zε_{s_n})] = 0.
Ito's formula tells us that
E[{f(Zε_t) − f(Zε_s) − ∫_{r=s}^t (σ²(r/ε)/2) f̈(Zε_r) dr} ∏_{n=1}^N φ_n(Zε_{s_n})] = 0.
Thus we really want to show that
(87) lim_{ε↘0} E[∫_{r=s}^t {σ²(r/ε) − σ̄²} f̈(Zε_r) dr ∏_{n=1}^N φ_n(Zε_{s_n})] = 0.
To show (87), define
Φ(t) := ∫_{r=0}^t {σ²(r) − σ̄²} dr.
We note that Φ is bounded (by periodicity) and differentiable. We have that
εΦ(t/ε) f″(Zε_t) = ∫_{r=0}^t {σ²(r/ε) − σ̄²} f″(Zε_r) dr
+ (ε/2) ∫_{r=0}^t Φ(r/ε)σ²(r/ε) f⁽⁴⁾(Zε_r) dr + ε∫_{r=0}^t Φ(r/ε)σ(r/ε) f⁽³⁾(Zε_r) dW_r.
In other words,
E[∫_{r=s}^t {σ²(r/ε) − σ̄²} f̈(Zε_r) dr ∏_{n=1}^N φ_n(Zε_{s_n})]
= εE[{Φ(t/ε)f″(Zε_t) − Φ(s/ε)f″(Zε_s)} ∏_{n=1}^N φ_n(Zε_{s_n})]
− (ε/2)E[∫_{r=s}^t Φ(r/ε)σ²(r/ε) f⁽⁴⁾(Zε_r) dr ∏_{n=1}^N φ_n(Zε_{s_n})].
This gives us the claim.
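The averaging effect is easy to see directly, since Zε_t is Gaussian with variance ∫₀ᵗ σ²(s/ε) ds. The sketch below (an illustrative choice of σ and horizon, not from the text; the grids are arbitrary) checks that this variance approaches σ̄²t as ε ↘ 0:

```python
import numpy as np

# Averaging sketch: Z^eps_t has variance int_0^t sigma^2(s/eps) ds,
# which tends to sigma_bar^2 t.  For sigma(theta) = 1 + 0.5 sin(2 pi theta),
# sigma_bar^2 = 1 + 0.5^2/2 = 1.125.  We take t = 0.37 so that eps = 1
# is far from the averaged value while eps = 0.01 hits it.
sig2 = lambda th: (1.0 + 0.5 * np.sin(2 * np.pi * th))**2
t = 0.37
s = np.linspace(0.0, t, 1_000_001)

def variance(eps):
    f = sig2(s / eps)
    return float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(s)))  # trapezoid rule

for eps in [1.0, 0.1, 0.01]:
    print(eps, variance(eps))  # tends to 1.125 * 0.37 as eps decreases
```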