Small-time asymptotics of stopped Lévy bridges
and simulation schemes with controlled bias
José E. Figueroa-López1
1Department of Statistics
Purdue University
Computational Finance Seminar
Purdue University
January 17th, 2013
(Joint work with Peter Tankov from Paris 7, France)
Outline
1 Motivation: Barrier Options
2 Small-time asymptotics for stopped Lévy bridges
Formulation of the Problem
The Main Result
3 Monte Carlo Methods for stopped Lévy processes
Bridge MC Simulation
Adaptive simulation with bias control
Numerical Illustration
4 Conclusions
Barrier options
• Set-up:
A market consisting of a money market account with constant interest rate r
and a risky asset with price process {S_t}_{0≤t≤T};
• European Barrier Options: Options whose payoff at maturity is triggered or cancelled when the stock price “hits” a certain domain of price values.
• Up-and-in call: Given a “barrier” value B > S_0,
X = (S_T − K)^+ if sup_{0≤t≤T} S_t ≥ B, and X = 0 otherwise; that is, X = (S_T − K)^+ 1{sup_{t≤T} S_t ≥ B}.
• Up-and-out call: Given a “barrier” value B > S_0,
X = (S_T − K)^+ (1 − 1{sup_{t≤T} S_t ≥ B}) = (S_T − K)^+ 1{sup_{t≤T} S_t < B}.
• Down-and-out call: Given a “barrier” value A < S_0,
X = (S_T − K)^+ (1 − 1{inf_{t≤T} S_t ≤ A}) = (S_T − K)^+ 1{inf_{t≤T} S_t > A}.
Barrier options. Cont...
• Payoff of a general double-barrier “out-type” barrier option:
X := f(S_T) 1{S_t ∈ (A, B) for all t ∈ [0, T]}   (0 ≤ A < S_0 < B ≤ ∞)
  = F(X_T) 1{X_t ∈ (a, b) for all t ∈ [0, T]}
  = F(X_T) 1{τ > T},
where
• X_t = ln(S_t / S_0)   (log-return process),
• F(x) = f(S_0 e^x),
• a = ln(A/S_0), b = ln(B/S_0)   (−∞ ≤ a < 0 < b ≤ ∞),
• τ := inf{t > 0 : X_t ∉ (a, b)}   (exit or hitting time).
Arbitrage-Free Pricing
1 (FTF) Under arbitrage-freeness, the time-0 premium of a European option with payoff X is given by the expected discounted payoff under a risk-neutral measure Q:
Π(X; T) = E_Q(e^{−rT} X) = E_Q(e^{−rT} F(X_T) 1{τ > T}).
2 In the Black-Scholes model, where
S_t := S_0 e^{μt + σW_t},   X_t = μt + σW_t,
under the real-world probability P, there exists a unique risk-neutral measure Q. Under Q,
S_t := S_0 e^{μ_Q t + σW_t},   X_t = μ_Q t + σW_t,   μ_Q := r − σ²/2.
3 The barrier premium is then
Π(X; T) = E_Q(e^{−rT} F(μ_Q T + σW_T) 1{μ_Q t + σW_t ∈ (a, b) for all t ≤ T}).
4 In general there is no closed-form expression, so one must rely on numerical methods.
Traditional (sequential) Monte Carlo (MC) Method
Algorithm:
1 Time discretization: 0 = t_0 < ··· < t_n = T (e.g., uniform sampling t_i = iT/n);
2 Sample simulation: X_{t_1}, ..., X_{t_n};
3 Approximation of the exit time: τ_n := min{t_k : X_{t_k} ∉ (a, b)};
4 Evaluation of the approximate final discounted payoff:
Y := e^{−rT} F(X_T) 1{τ > T} ≈ Ŷ := e^{−rT} F(X_{t_n}) 1{τ_n > T}.
5 Repeat (1)-(4) to generate m independent copies of Ŷ: Ŷ_1, ..., Ŷ_m.
6 MC estimate: Ĉ(0; T, F) := (1/m) Σ_{i=1}^m Ŷ_i.
Error analysis:
There are two errors: discretization error and statistical error. The former is due to the approximation τ_n ≈ τ and the latter is due to the approximation (1/m) Σ_{i=1}^m Ŷ_i ≈ E_Q(Ŷ). (A minimal code sketch of this scheme follows below.)
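A minimal Python sketch of this sequential estimator for an up-and-out call in the Black-Scholes case (X_t = μ_Q t + σW_t under Q); all numerical parameter values are illustrative, not taken from the slides:

import numpy as np

def sequential_mc_up_and_out(S0=100.0, K=100.0, B=120.0, r=0.05, sigma=0.2,
                             T=1.0, n=252, m=100_000, seed=0):
    """Sequential MC for an up-and-out call; crossings of the log-barrier b
    between the sampling times t_i are not detected."""
    rng = np.random.default_rng(seed)
    dt = T / n
    mu_Q = r - 0.5 * sigma**2            # risk-neutral drift of X_t = log(S_t / S0)
    b = np.log(B / S0)                   # log-barrier
    # discrete skeleton X_{t_1}, ..., X_{t_n} for m independent paths
    incr = mu_Q * dt + sigma * np.sqrt(dt) * rng.standard_normal((m, n))
    X = np.cumsum(incr, axis=1)
    alive = X.max(axis=1) < b            # tau_n > T  <=>  no sampled point reaches b
    payoff = np.exp(-r * T) * np.maximum(S0 * np.exp(X[:, -1]) - K, 0.0) * alive
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(m)

price, stderr = sequential_mc_up_and_out()
print(price, stderr)

For this knock-out payoff the estimator is biased upward, since paths that exit between sampling times are still counted as alive.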
Traditional (sequential) Monte Carlo (MC) Method
Drawbacks of sequential MC for stopped processes:
1 Highly biased due to the possibility of exiting the interval (a, b) between sampling observations.
2 The discretization error is of order 1/√n for diffusions (Asmussen, Glynn, and Pitman, 1995) and for diffusions with finite jump activity (Dia and Lamberton, 2007);
3 Unknown for general Lévy processes, but it is expected to be much larger for infinite jump activity Lévy processes.
Improved MC for Markov processes (Baldi, 1995)
1 Suppose one can compute the exit probability of the “bridge” process:
p(x, y, t) := P(X_u ∉ (a, b) for some u ∈ [s, s + t] | X_s = x, X_{s+t} = y).
2 By the Markov property, for any fixed times 0 = t_0 < ··· < t_n = T,
E[F(X_T) 1{X_u ∈ (a, b), ∀u ∈ [0, T]}]
  = E[F(X_T) ∏_{i=0}^{n−1} 1{X_u ∈ (a, b), ∀u ∈ (t_i, t_{i+1}]}]
  = E[F(X_T) E[∏_{i=0}^{n−1} 1{X_u ∈ (a, b), ∀u ∈ (t_i, t_{i+1}]} | X_{t_1}, ..., X_{t_n}]]
  = E[F(X_T) ∏_{i=0}^{n−1} E[1{X_u ∈ (a, b), ∀u ∈ (t_i, t_{i+1}]} | X_{t_i}, X_{t_{i+1}}]]
  = E[F(X_T) ∏_{i=0}^{n−1} (1 − p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i))].
Improved MC Method. Cont...
Algorithm:
1 Simulation of a discrete skeleton of the process: {(t_1, X_{t_1}), ..., (t_n, X_{t_n})};
2 Compute the (discounted) conditional expected payoff given the skeleton:
Ŷ := e^{−rT} F(X_{t_n}) ∏_{i=0}^{n−1} (1 − p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i)).
3 Repeat (1)-(2) to generate m independent copies: Ŷ_1, ..., Ŷ_m.
4 MC estimate: Ĉ(0; T, F) := (1/m) Σ_{i=1}^m Ŷ_i.
Advantage:
There is no discretization error; the only error is statistical (of order m^{−1/2} by the CLT). A sketch of this weighted scheme in the one-barrier Black-Scholes case, where p is known in closed form, follows below.
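The sketch below uses the Brownian-bridge exit probability p(x, y, t) = exp(−2(b − x)(b − y)/(σ²t)) for x, y < b, i.e. the one-sided case a = −∞; all parameter values are illustrative:

import numpy as np

def bridge_weighted_mc_up_and_out(S0=100.0, K=100.0, B=120.0, r=0.05, sigma=0.2,
                                  T=1.0, n=50, m=100_000, seed=0):
    """Baldi-type MC: weight each path by prod_i (1 - p(X_{t_i}, X_{t_{i+1}}, dt))."""
    rng = np.random.default_rng(seed)
    dt = T / n
    mu_Q = r - 0.5 * sigma**2
    b = np.log(B / S0)
    incr = mu_Q * dt + sigma * np.sqrt(dt) * rng.standard_normal((m, n))
    X = np.hstack([np.zeros((m, 1)), np.cumsum(incr, axis=1)])   # include X_{t_0} = 0
    # Brownian-bridge crossing probability on each sub-interval; set to 1 when an
    # endpoint is already at or above the barrier (the exit is then certain).
    below = (X[:, :-1] < b) & (X[:, 1:] < b)
    expo = -2.0 * (b - X[:, :-1]) * (b - X[:, 1:]) / (sigma**2 * dt)
    p = np.where(below, np.exp(np.minimum(expo, 0.0)), 1.0)
    weight = np.prod(1.0 - p, axis=1)
    payoff = np.exp(-r * T) * np.maximum(S0 * np.exp(X[:, -1]) - K, 0.0) * weight
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(m)

Compared with the sequential sketch above, the weighted estimator has no discretization bias and typically needs far fewer sampling times.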
The Problem
Important question: How do we find the exit probability p(x, y, t)?
• A closed form is available for the Black-Scholes model X_t = μt + σW_t;
• A small-time approximation is known for diffusions (Baldi, 1995)
dX_t := μ(t, X_t) dt + σ(t, X_t) dW_t;
• No approximation was known for processes with jumps.
Key problem:
Given a domain (a, b) with −∞ ≤ a < 0 < b ≤ ∞ and initial and final points x, y ∈ (a, b), we want to characterize the small-time asymptotics of the exit probability for Lévy bridges:
p(x, y, t) := P(X_u ∉ (a, b) for some u ∈ [s, s + t] | X_s = x, X_{s+t} = y),
where X := (X_t)_{t≥0} is a general “Lévy model”.
Exponential Lévy Model
1 From Black-Scholes to an exponential Lévy model: The log-return process X_t := ln(S_t/S_0) is a Brownian motion with drift, σW_t + μt:
• X_0 = 0;
• X has independent increments: X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}} for any t_0 < ··· < t_n;
• X has stationary increments: the distribution of X_{t+s} − X_s is independent of s;
• The paths of X are continuous;
• The distribution of X_{t+s} − X_s is Gaussian with mean μt and variance σ²t.
2 In a Lévy model, the distribution law of X is characterized by its Lévy triplet (σ, ν, μ) via the Lévy-Khintchine formula:
E[e^{iu(X_{s+t} − X_s)}] = e^{t(iuμ − σ²u²/2 + ∫ [e^{iux} − 1 − iux 1{|x| ≤ 1}] ν(dx))}.
• The Lévy measure ν governs the intensity of jumps;
• σ is the volatility of the continuous component;
• μ is related to a deterministic drift or expected rate of growth.
Exponential Lévy Model
1 From Black-Scholes to an exponential Lévy model: The log-return process X_t := ln(S_t/S_0) is a Lévy process:
• X_0 = 0;
• X has independent increments: X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}} for any t_0 < ··· < t_n;
• X has stationary increments: the distribution of X_{t+s} − X_s is independent of s;
• The paths of X may have discontinuities, but only of the “first type” (jump type);
• In principle, there is no restriction on the distribution of X_{t+s} − X_s.
2 In a Lévy model, the distribution law of X is characterized by its Lévy triplet (σ, ν, μ) via the Lévy-Khintchine formula:
E[e^{iu(X_{s+t} − X_s)}] = e^{t(iuμ − σ²u²/2 + ∫ [e^{iux} − 1 − iux 1{|x| ≤ 1}] ν(dx))}.
• The Lévy measure ν governs the intensity of jumps;
• σ is the volatility of the continuous component;
• μ is related to a deterministic drift or expected rate of growth.
Important Lévy models
1 (σ, 0, μ) corresponds to Brownian motion with drift, X_t = σW_t + μt;
2 (σ, ν, μ) with ν(dx) = s(x) dx and ∫_ℝ s(x) dx < ∞ corresponds to
X_t = μt + σW_t + J_t,
where J_t is a compound Poisson process with jump intensity λ := ∫ s(x) dx and jump density p(x) := s(x)/λ.
• If p(x) is Normal, then X is known as the Merton model;
• If p(x) is double exponential (Laplace distribution), then the model is called the Kou model.
(A minimal sketch of simulating one increment of such a finite-activity model is given below.)
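For instance, one increment of such a finite-activity model can be simulated directly. The sketch below uses Gaussian (Merton-type) jumps; all numerical values are illustrative, and for a Kou model the jump sizes would be drawn from a double-exponential law instead:

import numpy as np

def jump_diffusion_increment(dt, mu=0.0, sigma=0.2, lam=1.0,
                             jump_mean=-0.1, jump_std=0.15, rng=None):
    """One increment of X_t = mu*t + sigma*W_t + J_t with compound Poisson jumps."""
    if rng is None:
        rng = np.random.default_rng()
    n_jumps = rng.poisson(lam * dt)                        # number of jumps in (t, t + dt]
    jumps = rng.normal(jump_mean, jump_std, size=n_jumps).sum()
    return mu * dt + sigma * np.sqrt(dt) * rng.standard_normal() + jumps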
Short-time asymptotics of stopped Lévy bridges
1 Problem:
Characterize the small-time asymptotics of the exit probability:
p(x, y, t) := P(∃u ∈ [s, s + t] : X_u ∉ (a, b) | X_s = x, X_{s+t} = y).
2 Note that
P(X_u ∉ (a, b) for some u ∈ [s, s + t] | X_s = x, X_{s+t} = y)
  = P(X_u ∉ (a − x, b − x) for some u ∈ [0, t] | X_0 = 0, X_t = y − x).
So, it suffices to study the small-time asymptotics of the exit probability
P(X_u ∉ (a, b) for some u ∈ [0, t] | X_t = y, X_0 = 0) = P(τ ≤ t | X_t = y, X_0 = 0),
where
τ := inf{u ≥ 0 : X_u ∉ (a, b)},   y ∈ (a, b),   −∞ ≤ a < 0 < b ≤ ∞.
Related results
From now on, we suppose
ν(dx) = s(x) dx, for some smooth function s : ℝ\{0} → (0, ∞).
Then, the following two assertions hold:
1 For any x > 0,
P(X_t ≥ x) ~ t ∫_x^∞ s(w) dw,   (t → 0);
2 [Léandre (1987)] If f_t(x) is the probability density of X_t, then
lim_{t→0} (1/t) f_t(x) = s(x),   (x ≠ 0).
(A quick numerical check of Léandre's limit in the Cauchy case is sketched below.)
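As a sanity check, Léandre's limit can be verified numerically for the Cauchy process, where both the marginal density f_t(x) = t/(π(t² + x²)) and the Lévy density s(x) = 1/(πx²) are explicit:

import numpy as np

x = 0.5
s_x = 1.0 / (np.pi * x**2)                      # Lévy density of the Cauchy process at x
for t in [0.1, 0.01, 0.001]:
    f_t = t / (np.pi * (t**2 + x**2))           # Cauchy marginal density at x
    print(t, f_t / t, s_x)                      # f_t(x)/t approaches s(x) as t -> 0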
The Main Result
Theorem. [F-L & Tankov (2012)]
• For a.e. y ∈ (a, b)\{0}, we have, as t → 0,
P(τ ≤ t | X_t = y) = (t²/2) ∫_{(a,b)^c} s(v) s(y − v) / f_t(y) dv + O(t^{3/2})
  = (t/2) ∫_{(a,b)^c} s(v) s(y − v) / s(y) dv + O(t^{3/2}).
• In particular,
p(x, y, t) = (t/2) ∫_{(a−x, b−x)^c} s(v) s(y − x − v) / s(y − x) dv + O(t^{3/2}).
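As a concrete illustration (not from the slides), the leading-order term can be evaluated by numerical quadrature for the Cauchy process, whose Lévy density s(x) = 1/(πx²) is explicit; the domain (a, b) = (−1, 1) and the values of y and t below are illustrative:

import numpy as np
from scipy.integrate import quad

def p_hat_cauchy(y, t, a=-1.0, b=1.0):
    """Leading-order approximation (t/2) * int_{(a,b)^c} s(v) s(y-v) / s(y) dv
    for the Cauchy process, s(x) = 1 / (pi x^2)."""
    s = lambda x: 1.0 / (np.pi * x**2)
    integrand = lambda v: s(v) * s(y - v) / s(y)
    left, _ = quad(integrand, -np.inf, a)        # mass of two-jump exits through a
    right, _ = quad(integrand, b, np.inf)        # mass of two-jump exits through b
    return 0.5 * t * (left + right)

print(p_hat_cauchy(y=0.3, t=0.01))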
Intuition from the finite jump-intensity case
Consider a compound Poisson process X_t = Σ_{n=1}^{N_t} ξ_n with jump intensity λ = 1:
P(τ ≤ t | X_t = y) ≈ P(∃u ≤ t : X_u ∉ (a, b); X_t ∈ (y − δ, y + δ)) / P(X_t ∈ (y − δ, y + δ))   (δ ≪ 1)
  ≈ (t²/2) × P(ξ_1 ∉ (a, b), ξ_1 + ξ_2 ∈ (y − δ, y + δ)) / (2δ f_t(y))
  → (t²/2) ∫_{(a,b)^c} s(x) s(y − x) / f_t(y) dx   (δ → 0).
Fundamental reason:
If, during a small time interval, X exits the interval (a, b) and then comes back to a point y ∈ (a, b), this essentially happens with two large jumps: the first one takes the process out of (a, b), while a second jump brings it back.
We show that this logic extends to a large class of infinite jump activity Lévy processes.
Illustration
Figure: Left: a Cauchy bridge crossing level b = 2 during [0, 1]; Right: a Cauchy bridge crossing level b = 2 during [0, 0.1].
Remarkable asymptotic identity:
P(τ ≤ t | X_t = y) ~ 2 P(X_{t/2} ∉ (a, b) | X_t = y),   (t → 0).
Back to Baldi's sequential MC Method
Algorithm:
1 Generation of the sample: X_{t_1}, ..., X_{t_n}
(from the distribution of the increments Δ_i^n X := X_{t_i} − X_{t_{i−1}});
2 Compute the expected payoff conditional on the discrete skeleton:
X̂ := F(X_{t_n}) ∏_{i=0}^{n−1} (1 − p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i)).
3 Repeat (1)-(2) to generate m copies of approximate payoffs: X̂_1, ..., X̂_m.
4 MC estimate: Ĉ(0; T, F) := (1/m) Σ_{i=1}^m X̂_i.
Proposed solution: Short-time approximation.
p(x, y, t) := P(∃u ∈ [0, t] : X_u ∉ (a − x, b − x) | X_0 = 0, X_t = y − x)
  ≈ (t²/2) ∫_{(a−x, b−x)^c} s(v) s(y − x − v) / f_t(y − x) dv =: p̂(x, y, t),   (x ≠ y).
Equivalently, using Léandre's small-time result f_t(y − x) ≈ t s(y − x),
p(x, y, t) ≈ (t/2) ∫_{(a−x, b−x)^c} s(v) s(y − x − v) / s(y − x) dv,   (x ≠ y).
Controlling the bias
1 We propose to generate the approximated expected payoff:
X := F(X_T) 1{τ > T} ≈ X̂ := F(X_{t_n}) ∏_{i=0}^{n−1} (1 − p̂(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i)).
2 The bias is introduced via the error in the approximation
p(x, y, t_{i+1} − t_i) ≈ p̂(x, y, t_{i+1} − t_i),
which in principle improves if t_{i+1} − t_i is small;
3 How do we choose a suitable mesh size between sampling times?
4 If we have at hand an estimate e_p(x, y, t) of the approximation error:
|p(x, y, t) − p̂(x, y, t)| ≤ e_p(x, y, t),
one may control the bias by splitting the subinterval [t_i, t_{i+1}] into two whenever
e_p(X_{t_i}, X_{t_{i+1}}, t_{i+1} − t_i) ≥ γ(t_{i+1} − t_i), for some desired tolerance γ > 0;
5 This is most suitable when combined with adaptive simulation (i.e., sampling more points only when and where needed) and bridge Monte Carlo.
Bridge Monte Carlo
1 Simulate the final value X_T from the marginal law f_T(·) of X_T;
2 Simulate intermediate points using the bridge law:
f^{br}_t(· | s, x, u, y) = Law(X_t | X_s = x, X_u = y),   (s < t < u).
Concretely, to generate X_0, X_{T/4}, X_{T/2}, X_{3T/4}, X_T, we proceed as follows:
• Simulate X_T from f_T(·);
• Simulate X_{T/2} from f^{br}_{T/2}(· | 0, 0, T, X_T);
• Simulate X_{T/4} from f^{br}_{T/4}(· | 0, 0, T/2, X_{T/2});
• Simulate X_{3T/4} from f^{br}_{3T/4}(· | T/2, X_{T/2}, T, X_T).
Advantages:
• The trajectory can be adaptively refined only where and when necessary;
• Variance reduction methods are easy to design by replacing the density of X_T with an importance sampling distribution.
Simulation from the Lévy bridge law
1 The Lévy bridge density (the density of X_{s+t/2} given X_s = 0, X_{s+t} = y) is
f^{br}_{t/2}(x | 0, 0, t, y) = f_{t/2}(x) f_{t/2}(y − x) / f_t(y).
2 In the case of unimodal marginal densities f_t, for all t > 0,
f^{br}_{t/2}(x | 0, 0, t, y) ≤ [2 f_{t/2}(y/2) / f_t(y)] × (1/2)(f_{t/2}(x) + f_{t/2}(y − x)),
where the first factor is the rejection rate and the second factor, f̄(x), is the proposal density.
3 This leads us to propose a new rejection-based method (sketched in code below):
• Simulate X̄ from f̄(x): X̄ = X with probability 1/2 and X̄ = y − X with probability 1/2, where X ~ f_{t/2}(·).
• Simulate U ~ Unif(0, 1), independent of X̄. If
U ≤ f_{t/2}(X̄) f_{t/2}(y − X̄) / [f_{t/2}(y/2) (f_{t/2}(X̄) + f_{t/2}(y − X̄))],
then accept X̄; otherwise, reject and go back to the previous step.
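A minimal Python sketch of this rejection sampler, specialized to the Cauchy process, whose marginal density f_t(x) = t/(π(t² + x²)) is unimodal and easy to sample from; function names and parameter values are illustrative:

import numpy as np

def f_t(x, t):
    """Marginal density of the Cauchy process at time t."""
    return t / (np.pi * (t**2 + x**2))

def sample_bridge_midpoint(y, t, rng=None):
    """Sample X_{t/2} given X_0 = 0, X_t = y by rejection from the proposal
    f_bar(x) = 0.5 * (f_{t/2}(x) + f_{t/2}(y - x))."""
    if rng is None:
        rng = np.random.default_rng()
    h = t / 2
    while True:
        x = rng.standard_cauchy() * h                      # X ~ f_{t/2}
        x_bar = x if rng.random() < 0.5 else y - x         # X_bar ~ f_bar
        num = f_t(x_bar, h) * f_t(y - x_bar, h)
        den = f_t(y / 2, h) * (f_t(x_bar, h) + f_t(y - x_bar, h))
        if rng.random() <= num / den:                      # acceptance test from the slide
            return x_bar

print(sample_bridge_midpoint(y=0.5, t=0.1))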
Bridge-based MC method with controlled bias
Algorithm
Given x and y, the algorithm simulates a discrete skeleton of a Lévy bridge X = {(T_i, X_{T_i})}_{i=0}^N on [0, ΔT] with T_0 = 0, T_N = ΔT, X_0 = x, X_{ΔT} = y such that
e_p(X_{T_{(i)}}, X_{T_{(i+1)}}, T_{(i+1)} − T_{(i)}) ≤ γ(T_{(i+1)} − T_{(i)}),   (1)
where 0 = T_{(0)} < ··· < T_{(N)} are the order “statistics” of {T_0, ..., T_N}. It returns
N(X) := ∏_{i=0}^{N−1} (1 − p̂(X_{T_{(i)}}, X_{T_{(i+1)}}, T_{(i+1)} − T_{(i)})).
FUNCTION N(parameters: x, y, ΔT)
  IF x ∉ D OR y ∉ D THEN RETURN 0
  IF e_p(x, y, ΔT) ≤ γΔT THEN RETURN 1 − p̂(x, y, ΔT)
  ELSE
    Sample X̄ from the bridge distribution of X_{ΔT/2} given X_0 = x, X_{ΔT} = y
    RETURN N(x, X̄, ΔT/2) × N(X̄, y, ΔT/2)
  END IF
(A Python sketch of this recursion is given below.)
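A hedged Python sketch of this recursion; p_hat, err_p (the error estimate e_p), and sample_bridge_midpoint are assumed to be supplied by the user, e.g. the Cauchy-specific routines sketched earlier, and the names are mine rather than the paper's:

def survival_weight(x, y, dT, a, b, gamma, p_hat, err_p, sample_bridge_midpoint, rng):
    """Recursive computation of N(X) = prod_i (1 - p_hat(...)) with bias control."""
    if not (a < x < b) or not (a < y < b):
        return 0.0                                   # skeleton already outside (a, b)
    if err_p(x, y, dT) <= gamma * dT:
        return 1.0 - p_hat(x, y, dT)                 # approximation accurate enough here
    # otherwise refine: insert the bridge midpoint and recurse on the two halves
    x_mid = x + sample_bridge_midpoint(y - x, dT, rng)   # bridge from x to y over [0, dT]
    return (survival_weight(x, x_mid, dT / 2, a, b, gamma, p_hat, err_p,
                            sample_bridge_midpoint, rng)
            * survival_weight(x_mid, y, dT / 2, a, b, gamma, p_hat, err_p,
                              sample_bridge_midpoint, rng))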
Illustration
Figure: A typical trajectory simulated by the adaptive algorithm (Cauchy process). The algorithm places more points at the parts of the trajectory that are close to the boundary.
Some important issues
1 The ordered sampling times 0 = T_{(0)} < ··· < T_{(N)} = T are random (and “non-anticipative”) times. Is the decomposition
E[F(X_T) 1{τ > T}] = E[F(X_T) ∏_{i=0}^{N−1} (1 − p(X_{T_{(i)}}, X_{T_{(i+1)}}, T_{(i+1)} − T_{(i)}))]
still true?
2 Does the algorithm terminate in finite time? Note that we require
e_p(X_{T_{(i)}}, X_{T_{(i+1)}}, T_{(i+1)} − T_{(i)}) ≤ γ(T_{(i+1)} − T_{(i)}),
for all i = 0, ..., N − 1.
3 Does the algorithm attain the desired bias control?
Convergence of the algorithm and bias control
Theorem. [F-L & Tankov (2012)]
Suppose that X satisfies one of the following conditions:
1 X does not hit points; that is, P(τ_{x} < ∞) = 0 for all x, where τ_{x} := inf{s > 0 : X_s = x}, or, equivalently,
∫_ℝ Re(1/(1 + ψ(u))) du = ∞,
where ψ denotes the characteristic exponent of X;
2 X has finite variation (e.g., the Variance Gamma process).
Also, assume the approximation error satisfies
lim_{t↓0} (1/t) sup_{x,y ∈ (a′,b′)} e_p(x, y, t) = 0,   for all a′, b′ ∈ (a, b).
Convergence of the algorithm and bias control. Cont...
Theorem. [F-L & Tankov (2012)]
Then, for any T > 0, γ > 0, and F such that E|F(X_T)| < ∞, we have:
(i) The previous adaptive algorithm terminates in finite time a.s.
(ii) The random skeleton X = {(T_i, X_{T_i})}_{i=0}^N generated by the above algorithm satisfies:
|E[F(X_T) 1{τ > T}] − E[F(X_T) × N(X)]| ≤ γ E[|F(X_T)|].
Error estimate for self-decomposable Lévy processes
Theorem. [F-L & Tankov (2012)]
Fix ε > 0 small enough, and let
(i) λ_ε := ∫_{|x|≥ε} s(x) dx,   b_ε := b − ∫_{ε<|x|≤1} x ν(dx),   σ²_ε := σ² + ∫_{|x|≤ε} x² ν(dx);
(ii) a_ε := sup_{|x|>ε} s(x),   a′_ε := sup_{|x|>ε} |s′(x)|,   C(η, ε) := (e σ²_ε / (ε η))^{η/ε};
(iii) α := b ∧ |a| and Δ_y := (b − y) ∧ (y − a) > 0.
For t > 0 small enough, the approximation p̂(0, y, t) = (t²/2) ∫_{(a,b)^c} s(v) s(y − v) / f_t(y) dv satisfies
|p(0, y, t) − p̂(0, y, t)| ≤ (1/f_t(y)) ( e^{−λ_ε t} C(Δ_y/4, ε) t^{Δ_y/(4ε)} {8/Δ_y + 2 a_ε t + a_ε λ_ε t²}
  + 2 e^{−λ_ε t} a_ε C(α/2, ε) t^{1 + α/(2ε)} {1 + t λ_ε} + (λ²_ε a_ε / 2) t³
  + a_ε λ_ε^{−1} (1 − e^{−λ_ε t} [1 + λ_ε t + (λ_ε t)²/2])
  + e^{−λ_ε t} t² [2 a²_ε + λ_ε a′_ε] (σ_ε t^{1/2} + |b_ε| t / 2) ).
Numerical Example 1: Cauchy Process
Figure: Computation of P(τ > 1) := P[sup_{0≤s≤1} X_s ≤ 10^{−2}] (a = −∞, b = 0.01, and F(·) ≡ 1). Left: values computed by the uniform discretization algorithm (UDA) and the adaptive algorithm (AA), as functions of the computational time (in seconds), for 10^6 paths. Different points on the graph correspond to different numbers of discretization times (n) for the UDA (from 256 to 16384) and different values of the tolerance parameter γ for the AA (from 9 to 9 × 10^{−3}). Right: comparison of the discretization bias for the uniform discretization and the bias for the adaptive algorithm.
Conclusions and extensions
Main results
• Asymptotics for stopped Lévy bridges with explicitly computable error bounds;
• A bridge simulation method for general Lévy processes;
• Application to Monte Carlo simulation of stopped Lévy processes with controlled bias via an adaptive bridge Monte Carlo simulation method.
Extensions:
• Simulation of stopping times and overshoots;
• Multidimensional Lévy processes in confined domains;
• General Markov jump processes.
For Further Reading
Figueroa-López & Tankov.
Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias.
Preprint, 2012. Available at arXiv and at www.stat.purdue.edu/~figueroa.
Figueroa-López & Houdré.
Small-time expansions for the transition distributions of Lévy processes.
Stochastic Processes and Their Applications, 119:3862-3889, 2009.
Léandre.
Densité en temps petit d'un processus de sauts.
Séminaire de probabilités XXI, Lecture Notes in Math., J. Azéma, P.A. Meyer, and M. Yor (eds.), 1987.