squvietna2

Upload: phuong-hoang

Post on 05-Apr-2018

216 views

Category:

Documents


0 download

TRANSCRIPT

  • 7/31/2019 SquVietna2

    1/65

    2012 Febru ary Joh n Von Neuman n Institute

    Stoehastie calculus applied in Finanee

    This course contains seven chapters after som e prerequisites, 18 hours plus exercises.

    0.1 Introduction, aim of the course

    The purpose is to introduce som e bases ofstochastic calculus to get tools to be applied

    to Finance, Actually, it is supposed that the hnancial market proposes assets, the pricesof them depending on time and hazard. Thus, thev could be modelized bv stochasticprocesses, assuming theses prices are kn vvn in continuous time. Moreover, we supposethat the possible states space, n, is inhnite, that the information is continuously known,that the trading are continuous. T hen , we consider that the model is indexed by timet , t e [0, T] or R+, and we will introduce some stochastic tools for these models.

    Remark that aetually the same tools could be useful in other areas, other than nancial models.

    0.2 Agenda

    (i) Brownian motion : this stochastic process is characterized bv the fact that little incre-ments modelize the noise, the physicalmeasure error.... The existence ofsuch a processisproved in the first chapter, Br vvnian motion is explicitely built, some ofuseful proper-ties are shown.(ii) Stochastic integral: actually It calculus allows to get more sophisticated processesby integration . This integral is defined in second chapter(iii) It formula all vvs to dierentiate functions ofstochastic processes.(iv) Stochastic dierential equations: linear equation goes to Black-Scholes model and afirst example ofdiusion. Then Ornstein-Uhlenbeck equation modelizes more com plicate

    hnancial behaviours.(v) Change of probabilitv measures (Girsanov theorem) and martingale problems willbefifth chapter. Indeed, in t hese hnancial models, we try to set on a probability space whereall the prices could be martingales, so with constant mean ; in such a case, the prices aresaid to be risk neutral, Thus we will get Girsanov theorem and martingale problem.(vi) Representation ofm artingales, complete markets: we introduce the theorem ofmar-tingale representation , meaning that, under convenient hvpotheses, anv F T-m easurable

    Tcom plete markets.(vii) A conclusive chapter apply all these notions to hnancial markets : viable market,complete market, admissible portfolio, optim al portfolio and so on in case of a sm allinvestor. We also look (if time enough) at European options.

    1

  • 7/31/2019 SquVietna2

    2/65

    0.3 Prerequisit es

    probability space, -algebra,Borelian -algebra on R,R .hltration, hltered probabilitv space,random variable,stochastic processes,trajectories: right continuous (cd) left limited (lg),process adapted to a hltration.

    0.4 Some convergences

    DeAn it ion 0 .1. Let Pn series of probability measures on a metric space (E,d ) endowedwith Borelian -agebra B, and P measure on B, The series (Pn) is said to weakly converge to P ifV G Cb(E ), Pn( f ) P ( f).

    DeAnition 0.2. Let (Xn) a series of random variabes on (Qn,An, Pn) taking theirvalues in a metric space (E, d, B). The series (Xn) is said to converge in law to

    X if the series of probabili ty measures (PnX :) Ueakly converges to P X -1 , meaning:Vf eCb(E), Pn(f(Xn)) - P (f(X )).

    - convergence,

    - almost sure convergence,

    - convergence in probability.

    Proposi t ion 0 .3. A most sure convergence yieds probability convergence.

    Propos i t ion 0.4. Lp convergence yields probability convergence.

    - Lebesgue theorems: m onotoneous, bounded convergence.

    - limit sup and limit inf of sets.

    Theorem 0 .5. Fatou: For all series of events (An)

    P(liminf An) < liminf P(An) < limsup P(An) < P(limsup An).n n n n

    T h eorem 0.6. Borel-Cantelli:

    ^ ^ P (A n) < TO ^ P(limsup An) = 0.n

    When the events An are independent and J2n P(An) = TO, then P(limsup An) = 1 .

    2

  • 7/31/2019 SquVietna2

    3/65

    DeAnition 0.7 . amily of random variables {Ua, a A } is u ni ormly in tegrab lewhen

    T heorem 0 .8. The following are equivalent:

    (i) Pamily {Ua,a A} is uniormy integrable,

    (u) supa E[|Ua |] < TO andVe, 3 > 0 : A A etP(A) < ^ E[|Ua |1A] < e.

    RECALL: an a lmost su re ly convergen t ser ies w h ich get a u n i ormly in tegrab leamily , m oreover, converges in L1.

    0.5 Conditional expectation

    DeAnit i on 0.9 . Let X a random variable belonging to L1(Q, A, P ) and B a -agebraincuded, in A. Ep(X/B) is the unique random variabe in L1(B) such that

    Coro lla ry 0 .1 0 . I f X e L2(A), | |X 112= ||Ep(X /B)|2 + | |X - Ep (X /B)|2 .

    0.6 stopping time

    This notion is related to a hltered probability space.

    DeAnit ion 0 .1 1 . A random variabe T : (Q, A, (Ft), p) ^ (R+, B) is a stopp ing t imei fVt R+, the event {u/T(u) < t} G F t.

    Examples :

    - a constant variable is a stopp ing time,

    - let O be an open set in A and X a continuous process, then

    is a stopp ing time, called hitting time.

    DeAnition 0 .12. Let T be a stopping time in fihration F t. The setFt = {A A,A n { u / T < t} Ft is caedstopped at time T.

    DeAnit i on 0.13. The process X.AT is called stopped, process at T , denoted, as X T.

    To (u ) = in f{ t ,Xt(u) O}

    3

  • 7/31/2019 SquVietna2

    4/65

    0.7 Martingales

    (ef. [30] pagos 8-12 ; [20] pagos 11-30.)

    X

    */

    (i) Xt 6 L 1(Q, A, P), Vt 6 R+,

    (ii) Vs < t ,E[X t/F s] = X s. (resp .)

    Lemm a 0 .15 . Let X be a martingae and (p a convex unction su ch that Vt 0(Xt) 6L l ,then (X ) is a sub-martingae ; if is a concave unction, then (X ) is an upper- m,artnqae. Jensen

    Proof exerdse,

    De n t o n 0.16. The martingae X is said, to be closed by Y 6L 1(Q, A , P ) if X t =E [Y/Ft].

    P ropos t on 0.17. Any mart nqale cidmMs a cdq m,odificaton (cf 30]).

    Xsuch thatsupt E[|Xt|] < TO. Then limt ^ Xt exists amost surey and beongs to L1(Q, A, P).

    I f X is closed by Z it is too by limt^ X t , denoted as X, equal to E[Z/ vt> F t].?? ?? ?? ??? ??? ???

    Corollary 0 .19 . A belou bounded upper-m,artnqale converqes almost surely to in inity.

    Xlimit Y of X t when t goes to infinity exists and belongs to L 1. Moreover X t= E[Y/Ft].

    X X

    Xt Y, Y L 1, t

    {Xt,t 6 R+} is a martingae.

    or

    (ii) (Xt) L 1 converges to Y when t goes to in nity.

    Th eorem 0.22. Doob: Let X be a cd sub-martingae with termina value X ^ and S andT, S < T be two stopping times. Then:

    X S < E[Xt/ Fs ] P amost surely.

    Proof: pages 19-20 [20].

    De n t on 0.23. The increasing process (M} (bracket) is de ned, as:

    t (M}t = lim p r o b a ' (M ti Mti_1)2|nH ^

    n being partitions of [0,t] and |n| = supj(Ti+1 ti).

    4

  • 7/31/2019 SquVietna2

    5/65

    In next chapter, we will show that ifM = B t Brownian m otion then (B)t = t.

    R e m a r k 0 .24. The squarred, integrabe martingaes admit a bracket.

    (M tMt2 (M)t is a martingae.

    Corollary 0.26 . For any pmr s < t, E[(Mt M s)2]/Fs] = E[(M)t (M)s/Fs].

    This proposition is often used as the bracket dehnition and then 0.23 is a consequence.

    The following concerns genera culture, but out of the agenda

    DeAnit i on 0 .27. Let X and, Y two processes, X is said to be a modiAcation oY i f :

    Vt > 0, P{Xt = Yt} = 1.

    X Y

    P{Xt = Yt, Vt > 0} = 1.

    R e m a r k 0 .28 . This second notion is stronger than the first One.

    X(Ft, t > 0) if Vt > 0,v A B(R) :

    {(s, )/0 < s < t ; X s() A} B([0, t]) F t,

    m,eaning that the appication on ([0, t] X n, B([0,t]) T) : (s,u>) ^ X s(u) is measurable.

    Xprogressively measurable m,odifica,t,ion.

    Proof: cf, Meyer 1966, page 68 .

    P r o p o s i t ion 0.31. Let X be a F-progress ivdy measurabe process and T be a (Ft ) stop-ping time. Then(i) the appication u ^ X T(w)(u) is FT-measurable(ii) and the process t ^ X tAT is F-adapted.

    XA,

    Vt, {(s, u) , 0 < s < t, X s(u) A} B[o t] F t.

    Then {u : X T(u)(w) A} n {u : T( u < t} = {u : XT(u)At(u) A} n {T < t}.

    T is a F-stopping time, so the second event belongs to F t , and because of progressivelymeasurability the first is too.

    (ii) This second assertion moreover shows that X T is to 0 F-adapted.

    5

  • 7/31/2019 SquVietna2

    6/65

    or cg trajectories, it is progressively measurable.

    P roof: Define

    X

  • 7/31/2019 SquVietna2

    7/65

    1 Introduction of Wiener process, Brownian motion

    [20] pages 21-24 ; [30] pages 17-20.Historically, this process first modelizes the irregular motion of pollen par ticles sus-

    pended in water, observed by Eobert Brown in 1828. This leads to dispersion of micro-par ticles in water, also called a diusion of pollen in water. In fact, this movement iscurrentlv used in manv other models of dvnamic phenom ena:

    - Microscopic particles in suspension ,

    - Prices of shares on the stock exchange,

    - Errors in phvsical measurements,

    - Asymptotic behaviour of queues,

    - Anv behaviour from dvnamic random (stochastic dierential equations).

    Bspace ( , A, F t, P), adapted, continuous, taking its values in Rd such that:

    (i) B0 = 0, P-almost surely sur ,

    ( ) Vs < t, B t B s is independent of F s, with centered, Gaussian law with variancematrix (t s)Id.

    Consequently, let a real sequence 0 = t0 < t 1 < < tn < TO, the sequence (Bt.

    B t._ 1)i follows a centered Gaussian law with variance matrix diagonal, diagonal (ti ti -1)i .B

    The first problem we solve is the existence of such a process. There are several classicalconstructions.

    1.1 Existence based on vectorial construction, Kolmogorov lemma

    ([20] 2.2 ; [30] pages 17-20.) Very roughly, to get an idea without going into detailedproofs (long, delicate and technical), we proceed as follows. Let be C(R+,Rd) and

    B ( t ,u ) = u(t) be the coordinate applications called trajector ies. Space is endowedwith the smallest -algebra A which implies the variable {Bt , t R+} measurable andwith natural hltration generated bv the process B : F t = {Bs, s < t} . On ( , A)the existence of a unique probability measure P t proved, satisying Vn N,t_, ,t n R+,B 1, , Bn Borelian of Rd :

    P{u/ u( ti) BiVi = 1, , n} = p(t1, 0,X1)p(t2t1,X1,X2) p( tn tn-1,Xn-1,Xn)dX1..dXn,J Bi J Bn

    wherep(t, X, y) = -=te - iJ^ .Then the point is to show:

    A.

    7

  • 7/31/2019 SquVietna2

    8/65

    - Under this probability measure, theprocess t u(t) is a Brownian motion accordingto the original dehnition .In fact, this defines a probability measure on the Borel sets of another space Q' =A(R+, Rd), n not being One ofits Borelian sets. Instead of that, we choose n = A(R+, Rd)and Kolmogorov theorem (1933).

    DeAnit ion 1.2. A consistent family of nite imensiona distributions (Qt , t n-uple R+)is a famiy of measures on (Rd,B(Rd)) such that

    - if s = (t), s and t 6 (R+ )n, a permutation of integers { 1, ,n} A 1, ,A n 6B(Rd), then Qt(A 1, , An) = Qs(A (1 )) )A (n),

    - and ifu = (t 1 , , tn - 1 ), t = (t 1 , ,tn-1 ,tn), Vtn, Q t(A1 , , A n - 1, R) = Qu(A1 , , An-1 ).

    Theorem 1.3. (cf [20] page 50 : Kolmogorov, 1933) Let (Qt,t 6 (R+)n) be a consistent

    famiy of finite dimensiona distributions.Then there exists a probabiity measure P sur (Q, B(Q)) such that pour tout B 1, ,B n 6B( Rd),

    Qt(B 1 , ,Bn) = P{ u /u ( t ) 6 B i, i = 1 , ,n }.

    We app lv this theorem to the familv of measures

    Qt(A1 , ,A n )= p(t1, 0,X1) ,p(tn - tn - 1,X n-1,Xn)dx.^ nA

    Then we show the existence of a continuous modihcation of the process= coordianteapplications of n (Kolmogorov-Centsov, 1956), to get to the existence of a continuousmodihcation of the canonical process:

    Xrea random process on (Q,A, P) satisfying:

    3a , /3 ,C> 0 : E|Xt - Xs\a < C|t - s |1+ , 0 < s,t < T,

    then X admits a continuous modification X which is locally YHlder continuous:

    3 7 g]0, 3h variable aatoire > 0 , 3 > 0 :a

    P{ sup __ |Xt - XXs| < |t - s|Y} = 1 .

  • 7/31/2019 SquVietna2

    9/65

    R e m a r k 1 .5. This metr ic implies a topoloqy Uhich is the uniorm, on any compcic con-vergence in probability. = D(R+, R) is a compete space with respect to this norm (cf.30], paqe 9.)

    A = {u / ( u ( t1), , u ( tn)) B} where B t a Borelian set of Rn and t an n-uplet of

    weshow:

    P roposition 1.6. ( . 2, [20] page 60) Let Q be the -agebra generated, by thecyindrica sets reated to n uplets (ti) such that Vi, u < t.

    - G = VtGt coincides with ( , p) Boreian sets.

    - I f

    t :

    u (s u(s A t))

    then Q = - l (G) meaning t = C([0,t],R) Borelian sets.

    Tho construction is basod 011 Central lim it t hoorom.

    T heorem 1 .7. et (n) N be a sequence of random variables, indpendent, same law,centered,with variance 2. Then

    1 nSn = 7= > converqes in distributon to X of lam Af ( 0, 1).

    n v i=1

    This tool will allow us to oxplicitolv build t ho Brownian motion; t ho following thooromis callod Donskers nvarance theorem .

    ( , A , P )indpendent, same law, centered,with variance 2 > 0. Let be the amily of continuous

    processes

    1 ntX ? = + ~ n

    j =1

    Let Pn be the measure induced by X n on (C(R+, R), Q). Then Pn weakly converges to P*;measure under which Bt(u) =

  • 7/31/2019 SquVietna2

    10/65

    1 .2.1 Tigh t amilies and re lative compacity

    DeAnit ion 1 .9 . Let (S, p ) be a metric space and n a amily of probability measures on(S, B( S )); n is said to be relative ly compact if a weaky sub-sequence can be extracted

    from n.

    The amiy n is said to be tight if

    Ve > 0, 3K compact c S such that P(K) > 1 - e, VP 6 n.

    Similarly, a famiy of random variables {Xa : (Qa , Aa ) ; a 6 A} is said to be re l-a t ive ly compac t or t igh t if the famiy of reated probabiity measures on (S, B(S)) isrelatively compact or tight.

    We admit the following theorem.

    Theorem 1.10. (Prohorov theorem,, 1956, [20] 4-7)Let n be a family of probability measures on (S, B(S )). Then n is relatively compact ifand only if it is tight.

    This theorem is interesting since relative compacitv allows to extract a weaklv conver-gent sequence, but the tightness property is easier to check.

    DeAnit ion 1.11. O n = C(R+); continuity m o d u lu s on [0,T] is the quantity

    mT(u,S) = m ax | w ( s ^ u(t)\.|s-t| e} = 0,VT > 0, Ve > 0.^^ n>1

    Proof It is based on the following lemma:

    L em m a 1.13. ([20], . 9 page 62: Arzel-Ascol i theorem) Let be A c n. Then A iscompact if and only if

    sup |u(0)| < TO et VT > 0, lim sup mT(u,S) = 0.{weA} {weA}

    10

  • 7/31/2019 SquVietna2

    11/65

    (Xn)

    (1.8), we introuee notions of eonvergenee rolatod to procossos. Tho eonvergenee in lawprocoss as a wh ole is di cult to obtain, Wo introuee a eoneept oasior to voriy.

    (Xn)

    d s t r bu t o n to the process X if Vd N and for any d-upet (t1} ,t d), ( x n , ,Xtnd)converges in distributon to ( Xt l , , x td)

    duplets,

    ( X n)

    X X

    P roof: indeed, Vd and for aand n o X n = (Xtl, ,X' l) converges converges indistribution to n o X since continuity keeps the convergence in distribu tion .

    Warning! tho eonverse is not ahvavs truo! It can bo seen in tho following oxamplo asan Exorciso:

    X = n t l o ^ i t ) + (1 - n t ) l [ i .

  • 7/31/2019 SquVietna2

    12/65

    1.2.2 Don sker nvarance pr n cple a n d Wi en er measure

    In t his seetion weprovo t ho thoorom building Brownian motion. Wo study tho sequenee ofprocesses defined in principal theorem thanks to independent random variables (j, j > 1 ).We nood:

    - to prove t he convergence of sequence of processes (Xn,n > 0),

    - to prove t ho proportios of tho limit eonveniently to tho initial enition, Thus thoschomo of tho proof is:

    1) this sequenee eonverges in finito imensional distribu tion to a proeess with Brownianmotion proportios,

    2) t his sequenee is tight and Theorem 1.16 can bo applie,

    P ropos t on 1.17. (cf . 17 20]) Let be:

    when n goes to inhnity. Thon it is enough to got tho distribution eonvergenee of (S').condude the proo as an Exercise.

    Remark tha t ( S , t > 0) is an inepenent inerements procoss; if (t) aro inereasingordorod, tho d, random variablos (SV S2 SV --- , S S _ ) aro inepenent, Tho

    application from Rd to Rd : x (x1,x1 + x 2, ,^2 i xi) is continuous and distributio n

    X? 7= ( X / ^ ' + ~ n

    1 [nt]

    Then, Vd, V(t1, , td) 6 R + we get the distribution convergence:

    Proof: a first simplification uses:

    1[nt]

    Romark:nt - [nt]

    / s>[]+!' nBienaym -Tehebiehev inegality v iolds:

    dp { i i A 7 - 5 r i i > } < ^ i i e i i 2- 0

    - n zz

    12

  • 7/31/2019 SquVietna2

    13/65

    convergence is m ainta ined bv the continuitv. Then it is enough to look at the distributiond

    (1 ) n(ui, ,u ) = = n j_ E [e^r I '["j - l] 1) be a sequence of random variabes, inde-pendent, same lau!, centered, variance 1, and et be Sj = k=1 k. Then:

    Ve > 0,limlimn^ oop{max{i 0, limlimn^ 00P{max{i

  • 7/31/2019 SquVietna2

    14/65

    Dent on 1 .20. The measure P weak imit of probability measures Pn is the Wienermeasure on Q.

    1.3 Properties of t raje ctories of Brownian motion

    1.3.1 Gau ssan process

    Den t on 1.21. A process X is said to be Gau ss ian if Vd, V(t1, , td) positive reanumbers, the vector (Xtl, ,X ) admits a Gaussian aw. I f the aw (Xt+t.; i = 1 , ,d)

    t X

    X

    _________________p(s, t) = E[(Xs - E(Xs))(Xt - E(Xt))T], s,t > 0._________________

    Proposton 1.22. Broumian mot ion B is a centered cont inuous Gaussian process withcovariance p(s, t) = s A t.

    Reciprocally, any centered continuous Gaussian process with covariance p( s, t) = s A tis a Broumian motion.

    The Broumian motion converqes in mean" to zero:

    B t - 0

    t

    t

    P roof s. The third point is more or less a law of large numbers.

    Othor Brownian motions can bo obtainod bv Standard transormations, for instanee chang-ing tho hltra tion.

    (i) chango of scaling: :FC).

    (ii) inversion of timo: ( t,F ), avoc Y = t.Bi si t ^ 0, Y0 = 0 et t = {Y s, s < t.}.

    (iii) reversing time: (Zt,F tZ), avec Zt= B TBt et F tZ = {Zs, s < t}.(iv) symmetry: (B t , Ft).

    In oach ease wo havo to chock that it is an aapte continuous procoss satisying thocharactoristic propertv of Brownian motion or: th a t it is eentere continuous Gaussian

    p(s, t) = s A tTho only di cult caso is (ii) (Exorciso).

    Notation : nn = (to = 0, , tn = t) is a subdivision of [0, t], e ||nny = supi{ti ti-1}, called the mesh ofnn.

    14

  • 7/31/2019 SquVietna2

    15/65

    1.3.2 Zeros set

    This set is X = {(t,u>) 6 R+ X n : B t (u) = 0}. Let fixed a trajectorie u, denote= {t 6 R+ :Bt () = 0}.

    Theorem 1.23. (cf. [20] 9.6, p. 05) P-almost surely with respect to u

    (i) Lebesgue measure of is null,

    ( ) X dos&d no boune,

    (Ui) t = 0 is an accumulation point o ,

    (iv) is dense in itself.

    Proof too di cult Exorciso.... out of tho agena,

    1.3.3 Varatons ofthe trajectores

    (cf. [20] pb 9.8 p. 106 et 125)

    Theorem 1.24. (cf. 30] 28 p. 18)Let nn be a sequence of subdivisions of interval [0,t] such that nn c nm if n < m, and themesh of nn, denoted as ||nn, II goes to zero Uhen n goes to infinity. Let benn(B) = ^ t .en (Bt.+1 Bt.)2. Then nn(B) goes 10 t in L2(Q), and almost surely if

    E n 1 nn 1< TO, when n goes to in nity.Proof : Let be zi = (Bt.+1 B .) 2 (ti+1 ti) ; i zi = nn(B) t. It is a centered

    independent random variables sequence since B t .+1 B. law is Gaussian law with nullmean and variance ti+1 ti. Moreover we compute the expectation of zi2 :

    E[zi2] = E [(Bti+1B ti )2(ti+1ti)]2 = E[(Bti+1B ti)42 (Bti+1B ti)2(ti+1ti) + (ti+1ti)2].

    Knowing tho momonts of Gaussian law, we got:

    E[z2] = 2(ti+1 ti)2.

    The independence between the zi show that E[ ^ i zi)2] = E[(zi)2] equal to2J2i(ti+1 t,)2 < 21nnII.t, which goes to zero when n goes to inhnity. This fact yieldsL2(n ) convergence (so probability convergence) of nn(B ) to t. J k ^

    If moreover II Kn II< co, thon P{|7T(H) t\ > e} < \ '2 II 7Tn II t. Thus tho soriosP{|nn(B) t| > e} converges et Borel-Cantelli lemma proves that

    limsup

    P[limn{|7rn(H) - t\ > e } ] = 0 ,

    moaning:

    P[nn um>n {|nm(B) 1 | > e}] = 0, Ve > 0, almost surelv u n n m>n{|nm(B ) 1 | < e} = n,

    this expresses almost sure convergence of nn(B) to t.

    15

  • 7/31/2019 SquVietna2

    16/65

    T heorem 1.25 . (cf. /20] 9.9, p.106) Cai nay phai hieu the nao????

    P{w : t ^ B t (u) is monotoneous on any interva} = 0.

    P ro o f : let us denote F = {u : there exists an interval where t ^ B t (u) t monotoneous}.This could bo oxprossod as:

    F = UsteQ0 0} = . For anv n P( ) < P(A) thusP(A) = 0 for all s and t proving P(F) = 0.

    Theorem 1.26 . (cf. 20] 9.18, p.110 : Paley-Wiener-Zyqmund, 1933)

    P{w : 3t0 t ^ B t (u) differentiabe with respect to t0} = 0.

    More spec.ificay, denot nq D +f(t.) = lini/^o ^I l ; D+f( t) = lmfe ,there exists an event F of probability measure 1 incuded in the set:

    { : Vt, D +B t(u) = +TO ou D+Bt(u) = to}.

    Proof :

    Let be u such that there exists t such that t o < D+Bt(u) < D+Bt(u) < + t o . Then ,

    3j, k such that Vh < 1/k, lBt+h B t l < jh.

    We can find n greater than 4k and i,i = 1, ,n, such tha t :

    i 1 i , i + V V+ 1 1------ < t < , and i u = 1,2,3 : ------ t < < 7-.

    n n n n k

    These two romarks and trianglo inequalitv \Bi B_\ 4k,3i e {1, ,n} such that t e , -1, V = 1, 2, 3 : IBi+VBi+V-11< (2/+1

    L * * J L n >71-15 5 5 1 - ^ 1 n

    B

    V// = 1, 2, 3 : S +: - i+-i1 4k,3i = 1, ,n ,u = 1,2,3 : l-Biii- B i+v-11< -----n n /7,

    is bounded by nji'f27 \/n > 4k, thus goes to zero when goes to inhnity.

    DeAnit i on 1.27 . Lef be a unction de ned on interval [a, b]. We callvariation offon this interval ;

    V ar[a,b](f) = S u p Y ^ |f (ti+1) f (ti)|ti en

    where n belongs to the subdivisions o [a, b] set.

    Theorem 1.28 . (c. [30] p. 19-20 Let a and b be fixed in R+ .

    P{w : Var[a,b](B) = + t o } = 1.

    Proof :Let a and b be fixed in R+ and n a subdivision of [a, b].^ |B(ti+1) B(ti)|2

    (2) Y . ^ IRT \ - R Iten suPtien |B(ti+1) B(ti)|

    The numerator is the quadratic variation ofB, known as converging to t. Then, s ^ B s(u)is continuous so uniformly continuous on interval [a,b]:

    Ve, 3n, II n II< n ^ suptien|B(ti+1) B(ti)| < e.

    1.3.4 Lvy T h eorem

    This theorem gives the magnitude of the modulus of continuity.

    Theorem 1.29. (720] th. 9.25 pp ll -1 15)Let be g :]0, 1] R +,g( ) = \J 2 log( ). Then,

    ]P{ : lini

  • 7/31/2019 SquVietna2

    18/65

    ??????

    Theorem 1.30. (cf. [30] 31 p.22-23) Let be F t = (Bs, s < t) V N . Then the fitrationF is right continuous, meaning that Ft+ := n s>tFs coincides with F t .

    P ro o f (Exereise) usos t ho faet t hat

    ^U] , Vu2 , Vz > v > t,

    E[ei(uiBz +u2Bv)/F t+] = lim E[ei(uiBz+u2Bv)/F w] =w\t

    E [ei(uiBz +u2Bv) /F ]

    meaning that the F t+ and F t conditional laws are the same ones, so Ft+ = Ft

    1 .3.5 M arkov a n d m ar t nga le propert es

    Tho Brownian motion is a Markov procoss, meaning that:

    Vx e R, V f bounded Borelian, E x[f (Bt+s)/Fs] = Eb s [f (Bt)].

    The proof is easy, possibly handmade : under Px, B t+s = x + Wt+s and

    f (Bt+s) = f (x + Wt+s Ws + Ws),

    we conclude using independence ofx + Ws and Wt+s Ws.B

    1.4 Computa tion of2 /0 B sdBs (Exercise)

    BThe intuition could say that it isB2, but it is not, To stress the dierence between both,we decompose B"2 as a sum of differences along a subdivision of interval [0,t], denoted asti = it/n,

    B t = ^ s (Bti+1 B ti) = ^ 2Bti[Bti+i Bti] + ^ [Bti+i B ti]2.i i i

    2 0t BsdBs0

    is tho Paradox. We havo to romark that, bv enition of Brownian motion, t his seeonn

    t/iv, thus it is a random variablo with ~Xn law- oxpoctation is t and its varianco ist2/n V a r 2'. thus this term L2-converges (thus probability convergence) to its expectationt

    B t2 = 2 BsdBs + t.

    0

    18

  • 7/31/2019 SquVietna2

    19/65

    2 Stochastic integral

    The main purpose of this chapter is to give meaning to notion of integral of some processeswith respect to Brownian motion or, more generally, with respect to a m artingale. Guidedby the pretext of this course (stochastic calculus applied to Finance), we can motivatethe stochastic integral as following: for a moment studv a model where the price of ashare would be given bv a m artingale Mt at time t. If we have X (t) of such shares at

    t tk

    Y , X (tk-1)(MtkMtk-1).k

    t

    mathematical tool to move to limit in t he above expression with the problem, especiallyifM = B, t he derivative B' doesn t exist! this expression is a sum which is intendedto converge to a Stieljes integral, but since the variation V(B) is inhnite, this can notconverge in a determ inistic sense: the stochastic integral naive is impossible (cf.Protter page 40) as the following result shows it .

    Theorem 2.1 . Let n = (tk) be a subdivision of [0,T]. I flim |n|^0 k x( tk-1)( f (tk) f (tk-1)) exists, then f is fini te variation. (cf. Protter, th. 52,

    page 0)

    The proof uses Banach Steinhaus theorem, id est: if X t a Banach space and Ynormed vectorial space, (Ta) a sequence of bounded operators from X to Y such thatVx 6 X, (Ta(x)) t bounded, then the sequence (II Ta II) t bounded in R.

    Reciprocally, we get: V ( f) = +TO yields the limit doesn t exist, this the case iff : t ^ B t is Brownian motion . We thus must find other tools. The idea of Ito was torestrict integrands to be processes that can not "see" the increments in t he future, thatis adapted processes, so that, at least for the Brownian motion , x(tk-1) and (BBt -tk-1)are independent, so trajectory by trajectory nothing can be done. But we will work in

    probability, in expectation .

    The plan is as follows: after introducing the problem and some nota tions (2.1.1),

    we first define (2 .1 .2) the integral on the simple processes (S denotes the set of simpleprocesses, which will be defined below). Then 2.1.3 will gi ve the properties of this integralover S thereby extended by continuity on the closure of S for a well chosentopology, so to have a reasonable amount of integrands.

    2.1 Stochast ic integral

    2.1.1 Introdu ction an d notations

    M

    (Q, F t, P) where F t is for instance the natural hltration generated bv the Brownian motion ,

    19

  • 7/31/2019 SquVietna2

    20/65

    completed by negligeable events. For any measurable process X, Vn e N and Vt e R+ letus dofino:

    / (X ) = E A' < ^ A)(A * -j

    This quantity oes irt nocossarilv havo a limit. Wo havo to rostrict to a class of almostsurely square integrable (with respect to increasing process (M} defined below), adapted,measurable T>roc0SS0SX

    Dent o n 2.2 . The increasing process {M} is defined as:

    t ^ (M} t = lim probabili ty ^(Mt. Mt._i )2|nH0 ti&n

    where n describe the subdivisions of [0,t]. It is named bracket.

    The construction ofI (X , t) t due to Ito (1942) in case ofM Brownian motion , andKunita and Watanabo (1967) for squaro intograblo martingalos. An oxorciso in Chaptor1 w ith M = B proves (B}t = t.

    Remark 2.3 . The square inteqrabe cont nuous martnqales admi t a bracket.

    Reeall:

    (M}t

    Mt2 (M} t

    Vorv often, this proposition is brackot enition, and thon Denition 0.23 is a eonse-quonco.N o taton: let us define a measure on -algebra B(R+) F as

    M (A) = E [ A(t ,u)d(M }t(^)].J0

    X and Y are said to be equivalents ifX = Y M p.s.N o tation: for any adapted process X, we note [X\T = E[JT X?d(M}t].

    Remark tha t X et Y are equivalent if and onlv if [X]T = 0VT > 0.

    Lot us introuee t ho following sot of procossos:(3)

    L ( M) = { classes of processes X measurable F-adapted such tha t sup[X]T < +to}T

    enowe with tho metrie:

    (4) d (X , Y ) = ^ 2-nl A [X Y]n,

    n>1

    20

  • 7/31/2019 SquVietna2

    21/65

    thon tho subset of tho provious:

    L* = {X 6 L progressively measurable}.

    When the martingale M is such tha t (M) is absolutelv continuous with respect toLebesgue measure, since any element of L admits a modication in , in such a case, wemanage in L, but generallv, we will restrict to L*.

    P ropos t on 2 .5. Let L T be the set of adapted measurabe processes X on [0,T] suchthat:

    [X]T = E[ Xs2d(M)s] < +ro.0

    T, set o progressively measurabe processes o CT, is closed in CT . In particular, it is

    compete for the norm [.]T.

    Proof: Let (Xn) be a sequence in LT; converging to X: [XXn]T ^ 0. It is a sequencein L 2 space, thus complete and X 6 LT, convergence L 2 yields the existence of an almostsurely convergent subsequence. Let Y be the almost sure limit on Q X [0,T], meaningth a t A = {(w,t) : limnX n(u,t) exists } has probabilitv equal to 1 and Y (u , t ) = X ( u , t ) if (u, t) 6A, and if not is equal to 0, The fact tha t Vn, X n 6 LT shows that Y 6 LT an(lY t equivalent to X,

    2.1.2 Integ ral of smple processes a n d extenson

    X(ti) increasing to infinity and a amily ( ) of boundedF t.measurable random variabessuc.h that:

    r o

    X t= Col{0}(t) + ^ Cil]titi+1](t).i=1

    Denote S their set, note the inclusions S c * c . (to check as )Exercse: compute [X]T when X 6 S .

    Dent o n 2.7. Let be X 6 S. The stochastic integra of X with respect to M isro

    !t(X) = ^ Ci(MtAti+1 MtAt ) .i=1

    Wo havo now to exten this enition to a largor class of integrans,

    Lemma 2.8. For any bo u nded process X 6 L there exists a sequence of processesX n 6 S such that supT>0 limn E[f 0 (Xn X )2dt] = 0.

    21

  • 7/31/2019 SquVietna2

    22/65

    P roof

    (a) Case when X is eontinuous: sot X = 011 tho interval Y~r\. Bv continuity,

    obviously Xtn ^ X( almost surely, Moreover by hypothesis X is bounded; dominatedeonvergenee theorem allows to eonelue,

    (b) Case when X E * . set X n = m f(t -1/ )+X sds, this One is continuous and staymeasurable adapted bounded in L. Using step (a) Vm, there exists a sequence X m,n ofsimple processes converging to X m in L 2([0,T] X n,dP X dt) meaning that:

    (5) Vm VT lim E[ (X]n'n - X n)2dt] = 0.n^ J0

    Let be A = {(t ,u) e [0,T] X Q : limm ^ Xt n(u) = X t (u)}c and its u-sectio n Au,Vu. Since X t progressively measurable, A u E B([0,T]). Using Le besg u e fondamenta lth eo rem (cf. for instanco STEIX: "Singular Intograls and Di erentiability Proper ties of

    X

    X m - Xt = m (Xs - Xt)ds ^ 0J (t-1/m)+

    for almost any t and Lebesgue measure ofA u t null. On another hand, X and X m areuniormly bounded; bounded convergence theorem in [0,T] proves tha tVu f0T(Xs - Xxn)2ds ^ 0.

    Once again we apply bounded convergence theorem but in n so tha t E [ ( X s -

    X n ) 2ds] ^ 0. This fact added to (5) concludes (b).X

    that anv adaptod moasurablo procoss admits a progrossivoly moasurablo modihcation,named Y. Then there exists a sequence (Yn) of simple processes converging to Y in

    L2([0,T] X n ,dP X dt):

    E [ (Ys - Ysm)2ds] ^ 0 et Vt P(Xt = Yt) = 1.0

    Set nt = 1{Xt=Yt} Using Fubini theorem we get:

    E [ ntdt ]= P(Xt = Yt)dt = 0 0 0

    thus / 0T ntdt = 0 almost surely.

    f Tnt + 1 {Xt=Yt} = 1 ^ E[ 1 {Xt =Yt}dt] = T and l{Xt=Yt} = 1 dt XdP almost surelvt t 0 t t t t

    Finally:

    E [[ T(Ys - Ysm)2ds] = E l [ T 1{Xs=Ys}(Ys - Ysm)2ds] = E [ T (Xs - Ysm)2ds]0 0 0

    whieh givos tho conclusion.

    22

  • 7/31/2019 SquVietna2

    23/65

    Propo s i tion 2 .9. I f the increasing process t ^ (M )t is absoutely continuous with respectto dt F-almost surey,then the set S is dense in the metric space (L, d) with metric d de nedin ().

    P roof

    (i) Let be X 6 L and bounded: the previous lemma proves the existence of a sequenceof simpleprocesses (Xn) converging to X in L2(Qx [0, T], dPdt), VT. Thus there exists analmostsurely converging subsequence. Bounded convergence theorem and d(M)t = f (t)dtgot tho conclusion.

    (ii) Let be X 6 L no bounded: set Xtn(w) = Xt(w)1{|Xt(u)|n}d(M )s] ^ 0

    00 X 2

    (bounded convergence theorem), But Vn X n 6 L and are bounded: their set is dense inL.

    (iii) The set of simple processes is dense in the subset of bounded processes of L; (i)

    This proposition therefore provides the density of simple processes set in L in the caseM t dt

    density of simple processes only in L* with the following proposition .

    Propos t on 2 .10. S is dense in the metric space (L*, d) with metric d defined in ().

    Proof: Cf. Proposition 2.8 and Lcmma 2.7. in [20], pagos 135-137.

    R e m a rk 2 .11. dd (Xn ,X) ^ 0 when n goes to infinity if and only if

    V T > 0 , E [ |Xn(t) X (t) |2d(M)t] ^ 0.0

    2 .1.3 Constru ct o n of the stochas tc ntegral , elementar y propert es

    X

    ro

    I t ( X) = ^ 2 / ^i(MtAti+1 MtAti ) .i=1

    Let us denote I t ( X) = /0 XsdMs in case of integrator M. This simple stochastic integraladmits tho follow ing properties (Exorciso):Exe rc se . Let S be the set of simple processes on which the stochastic integral with

    M

    I t ( X) = ^ ^ Zj(Mtj+1AtMtjAt) .j

    23

  • 7/31/2019 SquVietna2

    24/65

    Prove tha t I t satisfies the follow ing properties

    (i) I t is a linear application .

    (ii) I t( X) is square integrable.(iii) Expectation ofI t ( X) is null.

    (iv) t ^ I t ( X ) is a continuous m artingale.

    ') E t( X )]2 = E[ 0 ( M ),]

    vi) E U M X ) - Is (X))2/F.] = E ' s X d(M)u/F\ .

    (vii) II ( X )), = f0 X d(M),.It

    We now exten tho sot of integrans ovor simplo procossos thanks to abovo ensity rosultsthon we chock that this new oporator satisfios t ho samc proportios.

    P ropos t on 2 .12. Let be X L* and a sequence of simpe processes ( X n) converging toX . Then the sequence (It (X n)) is a Cauchy sequence in L 2(Q). The limit doesr t depend,ofthe chosen sequence (it is denoted, as I t( X ) or /0 X sdMs ou (X .M )t), stochastic integral

    X M.

    P roof: using property (v) above we compute the norm L 2 o I t( X n)-.

    II I t (X n) - I t (Xp) 112= E [ f X - Xp\2d(M)s] ^ 0

    0Vt > 0 since d(Xn,Xp) ^ 0. Clearlv the same kind of argument proves that changing

    X

    II I t (X n) - I t(Yn) ||2^ 0

    along with d( X n, Y n) < d( X n, X ) + d(X, Y n).

    We now provo tho proportios:

    Propos t on 2 .13. let be X L*, then:

    I t

    (ii) It(X) is square integrabe.

    (Ui) Expectation of It(X) is null.

    (iv) t ^ I t (X ) is a continuous martingae.

    (v) E[I t(X)]2 = EUX ( M) J .

    (vi) EHIt( X ) - Is(X ))2/ F, ] = E [ f X'Ud{M) J T ,

    (mi) (I. (X))t = S0 X 2J ( M )

    (vi) EU h(X ))2/F ,] = I (X ) + EU IX2d( M)u/F s].

    (v ) ( I . (X))t = f0 X d (M )s .

    24

    s

  • 7/31/2019 SquVietna2

    25/65

    P roof: most of these properties are obtained passing to the L2 limit of propertiessatished bv It(Xn) Vn, for instance (i) (ii) (iii) (iv); (concerning (iv) note that the set ofsquare integrable continuous martingales is complete in L2).

    (v) is a consequence of (vi) with s = 0.

    (vi) Set s < t and A G F s, and com pute:

    E[1A(I t (X) - I s (X))2] = limE[1A(I t(Xn) - Is(Xn))2] =n

    lim E[1 a (Xn)2d(M)u]= E[1 a X 2ud(M)u]n s s

    since d(Xn , 0) ^ d(X, 0).(vii) is a consequence of (vi) and second characterisation of bracket (0.25).

    Proposi t ion 2 .14. For any stopping times S and T, S < T, and t > 0 we get:

    E[ItAT (X)/Fs] = ItAS (X ).

    I f X and Y G c*, almost surely,

    p tATE[(ItAT(X) - ItAs( X))(ItAT( Y) - ItAs(Y))/Fs] = E[ / XuYud(M)u/ Fs] .

    tAs

    Proof: t ^ I t ( X) is a m artingale, we apply Doob theorem concerning the stopping

    between two bounded stopping times S A t et T A t. Remark tha t E[ItAT(X) /F s] is FsAtmeasurable, thus equal to E[ItAT(X)/FsAt]. This is exactly the first point.

    Moreover, the bracket ofI . (X) is /0 X^d(M)s, so I t ( X)2- /0 X^d(M )s is a martingale;once again we apply Doob theorem concerning the stopping between two bounded stoppingtim es S A t et T A t, m eaning

    p T AtE[ ItAt ( X)2 - IsAt(X)2/FsAt] = E[ / X 2ud{M)u/FsAt].

    sAt

    This implies the second point using the previous remark on mesurability, finally we con-

    clude using polarisation argument.

    2.2 Quadratic variat ion

    (M)tM

    continuous martingales M and N, ifn are subdivisions dof[0,t], is defined as

    (M, N )t = hm proba J 2 (Mti+i - Mti)(Nti+i - Nti)|nH t e

    25

  • 7/31/2019 SquVietna2

    26/65

    or equivalentlv4(M,N)t := (M + N)t - (M - N)t.

    So, in case of X and Y 6 L*(M), we now can study the bracket (I( X ),I (Y )). Butpreviously we recall som e useful results on the brackets of square integrable continuousmartingales.

    M N

    (t) | (M,NM 2 < (M )t(N )t ,

    (ii) MtNt- (M, N)t is a martingae.

    Proof: (i) is proved as any Cauchy inequality. Since M + N is a square integrablecontinuous martingale, the dierence (M + N )2 - (M + N )t is a martingale and (ii) is a

    Proposi t ion 2.16. Let T be a stopping time, M and N be two square integrabe contin-uous martingaes. Then: (MT,N) = ( M , N T) = (M , N)T .

    Proof: cf. Protter [30] th.25, page 61.

    Let n be a subdivision of [0,t].

    (Mt ,N)t = Um ^ (M T +1 - M T)(N ,+1 - Nu).i

    The familv (ti A T) t a subdivision of [0, tA T ].

    (M , N )tAT= lim y ^ (MT Ati+1 - MTA ti) (NT Ati+1 - NT At).17T| ^i

    The dierence between these two sums is null on the event {T > t} and on the com plement{T < t}, it is

    (MT- Mti)(NtAti+i - NTAti+1) ,

    the index i such that T 6 [ti,t t.+1]. All these processes are continuous, so the limit

    M Ncontinuous martingales, X 6 L*(M) et Y 6 L*(N). Then almost surely:

    (6) ( r lXsYsld(M, N)s)2 |Xs|2d(M )s *|Ys|2d(N )s..7 .7 .7

    Proof:

    (i) first remark the almost sure inequalitv:

    (M, iV)t - (M, iV)s < ^ d(M)u + d (N )u)

    26

  • 7/31/2019 SquVietna2

    27/65

    consequence of inequality :

    2 E < M>.+. - Mt , ) , + , - Nt i) < ( M ll+, - M . )2 + i (Ntt+, - Nk)2

    i i iwhere we pass to probability limit thus almost sure for a subsequence.

    Let A be the increasingprocess (M) + (N ).All the nite variationprocesses (M ), (N ), (M, N)A

    d(M, N)t = f (t)dAt, d(M) t = g(t)dAt, d(N) t = h(t)dAt.

    (ii) For anv a and b:

    (.aXsV/g(s) + bYs h s) )2dAs > 0.

    0Using classic method in case of Cauchy inequalities, yields:

    (7) ( r \XsYs\y/g(s)h(s)dAs) 2 < r \Xs\2d(M)s f \Ys\2d(N)s.J0 J0 J0

    (iii) For any a the process (aM+ N) is increasing, so:

    J (a2g(u) + 2a f (u) + h(u))dAu > 0, Vs < t.

    SinceA is increasing, this implies that the integrand is positive: a2g(s) + 2af(s) + h(s) >0 Va G R, meaning f (s ) < \Jg(s)h(s) .This and (7) go to the conclusion .

    Proposi t ion 2 .18. Let M and N be two square integrable continuous martingales, X L*(M) and Y L*(N). Then:

    (8) (X.M, Y.N)t = X uYud(M,N)u, Vt R, P p.s.0

    and

    (9) E [ XudMu YudNu/Fs] = E [ X uYud(M, N )u/Fs], Vs < t, P p.s.s s s

    Proof: i t needs the preliminarv lemmas

    Lemma 2.19 . Let M and N be two square integrabe continuous martingales, and VnX n, X L*(M) such that Vt:

    lim [ \X: - Xu\2d(M)u = 0, P a.s.n 0

    Then:(I M( X n) ,N )t ^ n ^ (I M( X) ,N )t , P a.s.

    27

  • 7/31/2019 SquVietna2

    28/65

    P ro o f of lemm a: Wo look for ovaluato Cauchy rost.

    |( IM( X n) ,N ) t- ( IM(Xp) ,N)t |2 = | ( IM(X n - Xp), N )t|2

    < (IM( X n - X p))t(N )t = /* X - X u|2d(M)u(N)t

    l i n p' f rn in Ca i i r hv -S rhwa r t,7 innnna l i t v m n r n r n i n p ' b r atho inequality coming from Cauehy-Sehwartz inequality eoneerning brackots (cf. Propo-sition 2.15 (i)). Thus tho eonvergenee is an immodiato eonsequenee of tho hypothosis.

    Lemma 2.20. Let M and N be two square integrable cont inuous martingaes and X 6L*(M). Then for almost any t:

    ( IM(X) ,N ) t = X u d (M ,N)uP a.s.

    (Xn) X

    lim E [ X - Xu|2d(M)u = 0.n

    Let t be xed, and a subsequence, converging P a.s. : /t |Xn - Xu|2d(M )u ^ 0. Lemma2.19 provos:

    (10) (IM( X n) , N)t ^ ( IM( X ) , N)tP a.s.

    For simplo procossos:

    (IM( X n ) , N ) t = X i (M + 1 - M, k)(N,k+1 - N,k)i ske [ti ti+1]

    whidi goes t,0J X nd(M, N )u wlien supk |sk+1 - ski ^ 0. Finally

    (11 )

    / t / J ti / J V sk +1 sk / V sk +1 sk / i ske [titi+1]

    nd(M, N)u when supk |sk+1 - sk| ^ 0. Finally

    | / X nad( M ,N )u - / X u d (M ,N)u |2 =J J

    | f (Xun - X u)d (M ,N)u|2 < f X - Xu|2d(M)u(N)tJ0 J0

    nah n m n a litv t,hfvn t,a.kf alm os t Riirp rglrt limusing Kunita-Watanab inoqualitv (6), thon we tako almost suro right limit bv eonstrue-tion of X n. Then (11) goes to zero; this limit and t he previous (10) prove t he result.

    Pro of o f P ropos t on

    (i) Set N 1 = Y.N, Lemma 2.20 v ields:

    (X .M,Ni) t = Xud(M,Ni)uand (M, Y .N)t = / Yud(M,N)u J0 J0

    Wo eompose finito variation integrals to eonelue,

    (ii) Tho proportv is truo for anv simplo procoss; thon take tho probability limit.Exercise.

    28

  • 7/31/2019 SquVietna2

    29/65

    P ropos t on 2 .21. Let M be a square integrable cont inuous martingale and X L*(M).Then X . M is the unique square integrable continuous martingale $ null at t = 0 such

    N

    ($ ,N) t = X u d (M ,N)uP a.s.0

    Proof: actuallv X . M satisfies this relation according to Lemma 2.20. Then let $satisying hypothosos of tho proposition; for anv squaro integrable continuous martingaleN

    ($ - X . M , N ) t = 0, P a.s.

    As a particular case, if we choose N = $ - X . M ,w e get (N)t = 0 P a.s. th a t is

    $ X . M

    = 0, Pa.s.

    Coro llary 2.22. Let M and N be two square integrabe continuous martingales, X L*(M), Y L*(N), T a stopping time such that P a.s. :

    XtAT = YtAT et MtAT = NtAT.

    Then:(X .M )tAT = (Y .N )tAT.

    Proof: let H be a square integrable continuous martingale; using Proposition 2.16:

    (M - N, H)T = (Mt - N t , H ) = 0, P a.s.

    On 0110 han:

    VH, (X . M - Y. N, H)tAT = r T Xu d(M ,H)u - r T Xud (N ,H)u ,0 0

    011 tho other hand Proposition 2.16 and Lcmma 2.20 implv :

    ((X .M) T, H ) = (X.M, H) T = Xud(M, N)u = Yud(N, H)u0 0

    Thus wo can euee with 2.21

    (12) (X . M - Y .N , H)T = 0, P a.s.

    So ( X . M - Y .N ) T is a martingale, orthogonal to any square integrable continuous mar-tingale, and in particu lar to itself, so it is null.

    Propos t on 2 .23. The stochastic integra has associative property: if H L*(M) andG L*(H.M), then GH C*(M) and:

    G.(H.M) = GH.M

    Pro of: , cf. Pro tter th. 19 page 55 or K.s. corollary 2.20, page 145.

    29

  • 7/31/2019 SquVietna2

    30/65

    2.3 Integrat ion with respect t o local martingales

    Corollary 2.22 allows the extension ofintegrators set and integrands set. In this subsec-

    M

    DeAnition 2.24. Let P*(M ) be the set of progressively measurable processes such that

    Vt, X^d(M)s < TO, P a.s.J 0

    DeAnit i on 2 .25. Let be X G P* (M ) and M a oca martingae, with sequence of stoppingtimes Sn . Let be Rn(u) = inf{ t / X^d(M)s > n} and Tn = Rn A Sn We now define the

    X M

    X . M = X Tn.MTn on {t < Tn()}.

    P r o p o s i t ion 2.26. This is a robust defini tion sinceif n < m, X Tn.M Tn = X Tm.M Tm on {t < Tm(u)} and the process X . M so defined is aoca martingae.

    Proof: Corollary 2.22 savs that ift < Tm

    ( XTm Tm)Tn _ ( XT'mAT'n T'mAT'n) _ ( XTm Tm)

    Moreover thanks to this corollary, this dehnition doesnt depend on the chosen se-quence,Finally by construction , Vn, (X . M )Tn is a martingale, and this exactly means tha t X .M

    is a local martingale.

    This stochastic integral doesnt keep all the previous "good" properties. For instanceX . M

    ones concerning conditional expectations. But we have:

    P r o p o s i t ion 2 .27. Let M be a cont inuous local martingae and X G P(M ). Then X.Mis the unique oca martingae $ such that for any square integrabe continuous martingae

    N

    Proof: this is the "local" version of Proposition 2.21. On the event {t < Tn} ,X.MX Tn .MTn and satishes Vt, Vn and anv martingale N,

    t( X Tn .M Tn ,N)t = / XTnAsd(MTn ,N)s

    meaning / QTnAtX ud(M, N )u which converges almost surely to / QtX ud(M, N )u when n goesto inhnity.

    Reciprocally, for any martingale N we get the almost sure equality ($ X .M , N ) t= 0,particularly for N = ($ X .M ) Tn. Thus for any localising sequence (Tn), the martingale($ X . M ) Tn bracket is null; so ($ X .M ) Tn = 0 and almost surely $ = X.M.We implicitly used X T.M = (X.M)T and the result concerning brackets 2.16.

    0

    30

  • 7/31/2019 SquVietna2

    31/65

    3 It formula

    (cf. [20], pages 149-156, [30], pages 70-83)This tool all vvs integro-dierential calculus, usually called It calculus, calculus on

    trajectories of processes, thus the knowledge of what happens to a realization u Q.

    First recall the Standard integration with respect to finite variation processes.

    DeAn it i on 3.1 . Let A be a continuous process. It is said to be n ite variation ifVt, given the subdivisions n of [0, t] we get:

    \A ti+1 - A ti \ < ^ P a.s.

    Such processes, u being fixed, give rise to Stieltjes integral.

    Af of class C 1. Then, f (A.) is a continuous finite variation process:

    f (At) = f (A 0 ) + [ f '(As)dAs.0

    This is the order 1 Tav lor formula.

    These processes joined to continuous local martingales generate a large enough spaceof integrators, defined below.

    Xspace (Q, F , F t, P) P a.s. defined:

    Xt = X0 + Mt + At, Vt > 0,

    uuhere X 0 is F 0-measurable, M is a continuous oca martingae and A = A+ - A - , A+ etA -

    Recall: under AOA hypothesis, the prices are sem i-m artingales, cf. [7].

    3.1 It formula

    Theorem 3.4 . (It 19 , Kunita-Watanab 1967) Let be f C2(R, R) and X a contin-P a.s. Vt > 0

    f { X , ) = f ( X 0)+ f f ' (X , )d M ,+ [ f '{X ,)d A , + I r ( X , ) d { M ) J0 J0 2 J0

    the first integra is a stochastic integral, the two others are Stieltjes integras.

    31

  • 7/31/2019 SquVietna2

    32/65

    Differential notation : sometimes, we say that the stochastic differentiar off (Xt)is:

    df(Xa) = f'(Xa)dXa+ f (Xa)d{X)a,fromwhere we deduce a stochastic dierential calculus. This formula could be summarizedas an order 2 Tav lor formula.P roof: four steps.

    we "localise" to go to a bounded case,

    f 2,

    we study the term inducing stochastic integral,

    finallv the quadatric variation te rm .

    (1) Let be the stopping time

    Tn = 0 si |X0| > n,

    inf{t > 0; |Mt| > n or |At| > n ou (M)t > n}

    and infinitv if not.

    Obviously this sequence of stopp ing times is almost surely increasing to inhnity. TheTn

    (then n goes to inhnity). We thus ran assume that the processes M ,A , (M) and randomX0 X f

    admitting a compact support: f , f ' , f and f(3) are bounded,

    (2) To get this formula, and particularly the stochastic integral term, we cut the in-terval [0, t] as a subdivision n = (ti,i = 1,..., n) and we study the increments off (Xt) onthis subdivision:

    n1

    (13) f X - f (Xo) = J 2 ( f(Xi+1) - f X )) =i=0

    n- 1 n- 1 n- 1f ( X ti)(Mti+1 - M .) + Y , f (Xti)(Ati+1 - A .) + - j - (ji){X t+1 - X . ) 2,

    i=0 i=0 2 i=0

    where ni e [Xt',X t i+1].Obviously the second term converges to Stieltjes integral off ' ( X s) with respect to A.Here, nothing is stochastic.

    (3) Concerning first term, we consider the simple process associated to the subdivisionn :

    = f ' (Xt') si s ]ti, ti+i].

    Then this first term, by dehnition , is equal to /0 YsndMs. But

    r t n-1 [ ti+1

    ^ 7 - f '(Xs) |2d(M)s = / f { X t, ) - f ' (Xs)|2d(M)s.0 i=0 ti

    32

  • 7/31/2019 SquVietna2

    33/65

    The application s ^ f ' ( Xs) being continuous, the integrand above converges almost surelyto zero. The fact that f ' t bounded and bounded convergence Theorem prove that Ysnconverges to f ' ( Xs) in L 2(dP X d ( M}): bv dehnition, the rst term converges in L 2 to thestochastic integral

    f ' (Xs)dMs.0

    (4) Quadratic variation term: we decompose it in t hree terms:

    n- 1 n- 1

    (14) , r ( r i i ) ( X t,+i - X t, ) 2 = J 2 f ' (m)(Mt,+i - Mti ) 2i=0 i=0

    n- 1 n- 1

    + f (vi)(Mti+1 - Mti ) (Ati+i - A ti) f (vi )(Ati+1 - A ti ) 2i=0 i=0

    The last term is bounded bv IIf IIsupi |AiA| n- 0 IAiAI, bv hvpothesis \\f II and | A ^ |are bounded; supiA A goes to zero almost surely since A is continuous.

    The second term is bounded by IIf IIsupiA iM IY^n- 0 |AiA| which similarly convergesM

    The first term of (14) is near to be

    n- 1

    f '(Xti )(Mti+i - Mti )2.i=0

    Indeed:

    n- 1 n- 1

    , ( r ( r i ) - f ' (Xt , ) ) ( Ai M ) 2 < sup \ r ( m ) - f ' ( Xt i )l (Ai M) 2i=0 i i=0

    where supi l f(ni) - f '' (X t) Igoes almost surely to zero using f continuity, and (AiM)2goes to {M}t, by dehnition , in probability so t here exists a subsequence which converges

    L 2

    orem. It remains to studv n- 1

    , f ' ' (Xt i)(Mt i+i - Mt i ) 2i=0

    to be compared to J2 n- 0 f '' (X t )({M}t-+1 - ( M}t.). Its limit in L 2 s /0 f'' ( Xs) d{ M}s since- by continuity the simple process t ^ f ' ( Xt.) if t e]ti,ti+1] converges almost surely tof '' (Xs);- the bounded convergence Theorem concludes.

    Let be the dierence:

    n- 1

    f ~(Xii)[(Mti+i - Mti ) 2 - ((M} U+1 - ( M)H)],i=0

    33

  • 7/31/2019 SquVietna2

    34/65

    we studv its limit in L2; look at the expectation of rectangular terms:

    i < k : E[f (Xtt ) f ' ( X tl ) ( AiM2 - (M ) ^ 1)(AkM2 - (M );+')].

    Applying T. conditional expectation, we conclude that these term s are null since M 2 -(M)

    E[(f(Xti))2(AiM 2 - (M )'+ ')2] < 2 | | f ||L [E (A iM 4) + E (((M) '+ ')2)]i i

    < 2 | |f| |^ E [(sup A i M ^ A iM 2) + su p((M ) ti+1)(M)t].i ii

    In the bound, supi AiM 2 and supi((M)ti+1) are bounded and converge almost surely

    i A i M2

    (M)tbounded convergence T heorem, globally it converges to zero L 1, at least for a subsequence.

    As a conclusion , the sequence of sums (13) converges in probability to the result of The-orem; we conclude thanks to the alm ost sure convergence of a subsequence.

    3.1.1 Extension an d applications

    We can extend this result to functions of vectorial sem i-martingales depending also ontime.

    M d Ad X0variable, F0 -meM,surnhe. Let be f G C1,2(R+,R d). Set X t= X 0+ Mt+ A t . Then, P almostsurey:

    f ( t ,Xt ) = f ( 0 , X o ) + [ t dt f ( s,Xs )ds + / dif (s, X s ) dMis + / *di f (8,Xs )dsJo Jo Jo

    1 t+5 $H* ,X,)d(Mi,Mi),

    2 Jo ij

    Proof: to write it as a problem.

    f M stochastic integral term above is a "true" martingale, null in t = 0 and vields:

    f ( t , X t) - f ( 0 , X o ) - f dt f ( s , x s) d s - f dt f { s , X s) d A l - \ f d f { s , X s) d ( M \ M 3 ) s e M Jo Jo 2Jo

    A = 0 X = M

    f ( t , Xt) - f (0,X o) - [ L f ( s , Xs)ds MJo

    34

  • 7/31/2019 SquVietna2

    35/65

    where tho ierential oporator = dt +

    From Ito ormula we can euee tho solution of tho so-callod hoat oquation, meaningthe SPDE:

    / 1,2(R+ ,R d), 8 ,j = , \ d t t .x ) = V**)i

    where tp C2 (Rd) and the unique solution is

    f (t,x) = E [ (x + Bt)\.

    Wo oasilv chock that this unction is actually solution applying Ito ormula; tho uniquo-noss is a littlo bit more di cult to chock.

    For tho follow ing corollary, wo sot tho following notation-dohnition:

    Den t on 3.6. I f X is the cont inuous rea semi-mart inga e X0 + M + A, denote (X) (which is actually ( M }). Similarly for two continuous semi-martingales X and Y, denote(X, Y} the bracket of their martingae part.

    Corollary 3.7. Let be two cont inuous rea semi-martingales X and Y; then:

    XsdYs = XtYt - X0Y0 - YsdXs - (.X , Y }t.0 0

    This is the important form.ula, named ntegraton by part formula.

    Proof: Exereise, as a simplo application of Ito ormula.

    35

  • 7/31/2019 SquVietna2

    36/65

    4 Exam ples of stochastic differential equations (SDE)

    Here are other applications of It formula: a great use of Br vvnian motion is to modeladditive noises, measurement error in ordinary dierential equations. For instance let usassume dynamics given by:

    x(t) = a(t)x(t), t G [0 ,T], x(0) = x.

    But it is not exactly this, in addition to the speed there is a little noise, and we modelthe dvnamics as following:

    dXt= a(t)Xtdt+ b(t)dBt, t G [0,T], X o = x,

    called stochastic differential equation. We do not discuss the theory in this course,but we give another example bel vv .

    4.1 St ochast ic exponential

    Let us consider the function C, f : x ^ ex , and a continuous sem i-m ar tingale X,x 0= 0, let us apply It formula to the process Z = exp(Xt (X) t) . Yields:

    Z = 1 + J [exp(Xs -(X)g)(dXs -d(X)g) + -exp(Xs - ( X } s)d(X}s].

    So, after some cancellation :

    />t 1Z = 1 +J exp(Xs - { X) s )d Xs ,

    or using dierential notation :dZs = ZsdX s.

    This is an example of (stochastic) dierential equation . Then there is the following result:

    X Xo = 0.

    contnuous semimartingale vhich is soution of the stochastic differenf,ial equation:

    (15) Zt= 1 + Z sd X so

    vhich is explicitely:

    Z t(X) = e M X t - { X ) t).

    It formula sh vvs th at this processis actually solution of the required equation .Exercise: shou! the uniqueness assuming that there exists two soutions Z and Z ' , then

    apply It formua to the quotient Y =

    36

  • 7/31/2019 SquVietna2

    37/65

    Dent o n 4.2 . Let X be a continuous semimartingale, X0 = 0. The s tochastic ex-ponent a l of X, denoted, as E ( X), is the unique solution of the stochastic differentia equat on (15).

    Example: Let be X = aB where a t a real number and B the Brownian motion ;thon t(aB) = exp(aBt a 2t.), somotimos callod goomotric Brownian motion.

    Here aro somc rosults 011 these stochastic exponentials,

    X YX0 = Y0 = 0.

    E( X) E( Y) = E( X + Y + (X, Y }).

    P reuve: set Ut = Et( X) et Vt = Et(Y) and apply integration by part formula (3.7):

    UtVt - 1 = UsdVs + VsdUs + d(U,V}s 0

    Setting W = UVand using the dierential definition of the stochastic exponential we getthe result.

    Corolla ry 4.4. Let X be a continuous semimart ingae, X0 = 0. Then the inverse t~1(X) =Et ( -X + (X})

    Proof as an Exerc se.

    Let us now considor moro general linoar stochastic diorontial oquations.

    Theorem 4 .5. (cf. [30], th. 52, page 266.) Let Z and H two rea continuous semi-martingales, Z0 = 0. Then the stochastic differential equation:

    Xt = Ht + XsdZs0

    admits the unique soluton

    Eh(Z)t = Et(Z)(H0

    + E -1

    (Z)(dHs - d(H,Z})s).0

    P reuve: we uso tho metho of constant variation. Lot us assumo tha t tho solutionadmits tho form:

    X t = Et (Z )Ct

    and applv Ito ormula:

    dXt = CtdEt(Z) + Et(Z )dCt + d(E (Z ),C}t,

    so, replacing dEt(Z) bv its value and using the particular form of X:

    dXt = XtdZt + Et(Z)[dCt + d(Z, C}t].

    37

  • 7/31/2019 SquVietna2

    38/65

    XdXt

    dHt = E t ( Z ) [ d C t + d(Z, C)t].

    B u t s i n c e s t ( z ) i s a n e x p o n e nt i a l a n d s i n c e (Zt \{Z)t) i s f i n i t e , f 1(Z) e x i s ts a n d

    dCt = E - 1 ( Z )dHt - d(Z, C) t

    s o y i e l d s :

    d(Z,C)t = E - 1 ( Z )d (H, Z )t,

    a n d f i n a l l y :dCt = E - 1 ( Z ) [ d H t - d(H,Z)t].

    W e u s e d t h e c o v a r i a ti o n o f C and Z t th e s a m e a s t h e O n e o f E t ( Z ) - 1 . H and Z.

    4.2 Ornstein-Uhlenbeck equation

    A n o t h e r i m p o r t a nt e x a m p l e u s e d i n F i n a n c e ( f o r i n s ta n c e to m o d e l th e d v n a m i c s o f r a t e )i s t h e o n e o f Ornstein -U h lenbeck e q u a t i o n ( c f , [ 2 0 ] , p a g e 3 5 8 ) :

    d Xt = a(t)Xtdt + b(t)dBt, t [ 0 , T ] , X o = x

    w h e r e a and b a r e F - a d a p t e d p r o c e s s e s , a a l m o s t s u r e l y i nt g r a b l e w i t h r e s p e c t t o t ot i m e , b L2(Q X [ 0 , T ] , d P dt). ^ h 6 n a ^ n d b ^ ^ 6 ^ o n s t i t a 6 t p, W 6 g 6 t t h 6 s o l u t i o n :

    Xt = e~at(x + easdB s).o

    Morever i t c a n b e shown:

    m(t) = E(Xt) = m ( 0 ) e - a t2 2

    V(t) = Va r( Xt) =f + ( V ( 0 ) - f ) e - 2at

    p(s , t) = cov(Xs, X t) = [V(0) + ^ ( e 2a(-tAs)- l ) ] e ~ a(-t+s)2a

    4.3 Insight into more general stochast ic different ial equations

Generally, there exist sufficient conditions for the existence (and uniqueness) of a solution of the equation started at x:

(16) X_s^{t,x} = x + ∫_t^s b(u, X_u) du + ∫_t^s σ(u, X_u) dW_u;

for instance, hypotheses on the coefficients could be:

(i) continuous, with sublinear growth with respect to the space variable,


(ii) such that there exists a solution to the equation unique in law, meaning a weak solution: there exists a probability P_x on the Wiener space (Ω, F) under which

• X is F-adapted, continuous, taking its values in E,

• if S_n = inf{t : |X_t| ≥ n}, the stopped process X^{S_n} satisfies the existence conditions of strong solutions (meaning trajectorial solutions):

X_{t∧S_n} = x + ∫_0^{t∧S_n} b(u, X_u) du + ∫_0^{t∧S_n} σ(u, X_u) dW_u,

• S_n goes to infinity P_x-almost surely as n → ∞.

For clarification, let us quote the existence Theorem 6 page 194 in [30].

Theorem 4.6. Let Z be a semimartingale with Z_0 = 0 and let f : R+ × Ω × R → R be such that

(i) for fixed x, (t, ω) ↦ f(t, x, ω) is adapted and càglàd,

(ii) for each (t, ω), |f(t, x, ω) − f(t, y, ω)| ≤ K(ω)|x − y| for some finite random variable K.

Let X_0 be finite and F_0-measurable. Then the equation

X_t = X_0 + ∫_0^t f(s, X_{s−}) dZ_s

admits a solution. This solution is unique and it is a semimartingale.

    Or Theorem 2.5 page 287 in [20].

Theorem 4.7. Consider the SDE

dX_t = b(t, X_t) dt + σ(t, X_t) dW_t

such that the coefficients b and σ are locally Lipschitz continuous in the space variable, i.e. for every integer n ≥ 1 there exists a constant K_n such that for every t ≥ 0, ‖x‖ ≤ n and ‖y‖ ≤ n:

‖b(t, x) − b(t, y)‖ + ‖σ(t, x) − σ(t, y)‖ ≤ K_n ‖x − y‖.

Then strong uniqueness holds.
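In practice, equations of type (16) are approximated by the Euler-Maruyama scheme; here is a minimal sketch (the coefficients b(t, x) and σ(t, x) below are arbitrary Lipschitz examples chosen only for illustration, not taken from the notes):

    import numpy as np

    def euler_maruyama(b, sigma, x0, T, n, rng):
        """One path of dX = b(t, X) dt + sigma(t, X) dW by the Euler scheme."""
        dt = T / n
        x = np.empty(n + 1)
        x[0] = x0
        t = 0.0
        for k in range(n):
            dW = rng.normal(0.0, np.sqrt(dt))
            x[k + 1] = x[k] + b(t, x[k]) * dt + sigma(t, x[k]) * dW
            t += dt
        return x

    rng = np.random.default_rng(2)
    # arbitrary globally Lipschitz coefficients, with sublinear growth in x
    path = euler_maruyama(lambda t, x: -x,
                          lambda t, x: 0.3 * np.sqrt(1.0 + x**2),
                          x0=1.0, T=1.0, n=1_000, rng=rng)
    print(path[-1])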

4.4 Link with partial differential equations, Dirichlet Problem

    (cf. [20] 5.7 pages 363 et sq.)

Definition 4.8. Let D be an open subset of R^d. An order 2 differential operator

A = ½ Σ_{i,j} a_{i,j}(x) ∂²_{x_i x_j} + Σ_i b_i(x) ∂_{x_i}

is said to be elliptic at the point x if

Σ_{i,j} a_{i,j}(x) ξ_i ξ_j > 0 for every ξ ∈ R^d, ξ ≠ 0.


If A is elliptic at any point x ∈ D, it is said to be elliptic in D. If there exists δ > 0 such that

∀ξ ∈ R^d, Σ_{i,j} a_{i,j}(x) ξ_i ξ_j ≥ δ ‖ξ‖²,

it is said to be uniformly elliptic.

The Dirichlet problem is the one to find a C²-class function u on a bounded open subset D, with u(x) = f(x) ∀x ∈ ∂D, and satisfying in D:

Au − ku = −g,

with A elliptic, k ∈ C(D̄, R+), g ∈ C(D, R), f ∈ C(∂D, R).

Proposition 4.9. (Proposition 7.2, page 364, [20]) Let u be a solution of the Dirichlet problem (A, D) and X a diffusion with generator A = ½ Σ_{i,j} a_{i,j}(x) ∂²_{x_i x_j} + Σ_i b_i(x) ∂_{x_i}; let T_D be the exit time of D by X. If ∀x ∈ D,

(17) E_x(T_D) < ∞,

then ∀x ∈ D,

u(x) = E_x[ f(X_{T_D}) exp(−∫_0^{T_D} k(X_s) ds) + ∫_0^{T_D} g(X_t) exp(−∫_0^t k(X_s) ds) dt ].

Proof: Exercise (Problem 7.3 in [20], correction page 393). First let us remark that the continuity of X implies X_{T_D} ∈ ∂D.

Indication: prove that

M : t ↦ u(X_{t∧T_D}) exp(−∫_0^{t∧T_D} k(X_s) ds) + ∫_0^{t∧T_D} g(X_s) exp(−∫_0^s k(X_u) du) ds, t ≥ 0,

is a uniformly integrable martingale with respect to P_x: compute E_x(M_0) = E_x(M_∞); on {t < T_D}, compute the Itô differential of M and use, on D, Au − ku + g = 0. M_0 = u(x) since X_0 = x under P_x. The Itô differential is

dM_t = exp(−∫_0^{t∧T_D} k(X_s) ds) [ Au(X_{t∧T_D}) dt + ∇u(X_{t∧T_D}) σ(X_{t∧T_D}) dW_t + g(X_{t∧T_D}) dt − (k u)(X_{t∧T_D}) dt ];

the functions ∇u and σ are continuous thus bounded on the compact D̄, so the second term above is a martingale; moreover the other terms cancel since Au − ku + g = 0, and for any t, E_x[M_t] = u(x).

This martingale is bounded in L², so uniformly integrable, and we can let t go to infinity and apply the stopping theorem since E_x[T_D] < ∞.

Remark 4.10. (Friedman, 1975) A sufficient condition for hypothesis (17) is: there exist an index i and a > 0 such that a_{ii}(x) ≥ a > 0 on D. This condition is stronger than ellipticity, but weaker than uniform ellipticity in D.


Proof: set

b* = max{ |b_i(x)|, x ∈ D̄ }, q = min{ x_i, x ∈ D̄ },

and choose ν ≥ 4b*/a and h(x) = −μ exp(νx_i), x ∈ D̄, where μ > 0 will be chosen later. Then h is of class C^∞ and −Ah(x) is computed and bounded below:

−Ah(x) = μ ( ½ν² a_{ii}(x) + ν b_i(x) ) e^{νx_i} ≥ μν ( ½νa − b* ) e^{νx_i} ≥ ¼ μν²a e^{νq} ≥ 1.

Then we choose μ great enough so that −Ah(x) ≥ 1, x ∈ D̄; h and its derivatives are bounded on D̄, and Itô formula gives

h(X_{t∧T_D}) = h(x) + ∫_0^{t∧T_D} Ah(X_s) ds + ∫_0^{t∧T_D} ∇h(X_s) σ(X_s) dW_s.

This yields

t∧T_D ≤ −∫_0^{t∧T_D} Ah(X_s) ds = h(x) − h(X_{t∧T_D})

plus a uniformly integrable martingale. Thus E_x[t∧T_D] ≤ 2‖h‖_∞ and finally let t go to infinity.

    4.5 Black and Scholes model

This model is the one of a stochastic exponential with constant coefficients. We assume that the risky asset is solution to the SDE

(18) dS_t = S_t b dt + S_t σ dW_t, S_0 = s,

where b and σ are real constants. This equation admits the unique solution:

S_t = s exp[ σW_t + (b − ½σ²) t ].

    Let us remark that log St has a Gaussian law.

Exercise: prove the uniqueness of the solution of (18); you could use Itô formula and apply it to the quotient of two solutions.

The following definitions will be seen with more details in Chapter 7.

Definition 4.11. A strategy θ = (a, d) is said to be self-financing if

V_t(θ) = a_t S_t^0 + d_t S_t = θ_0·P_0 + ∫_0^t a_s dS_s^0 + ∫_0^t d_s dS_s.

Moreover it is said to be admissible if it is self-financing and if its value

V_t(θ) = V_0 + ∫_0^t θ_s·dS_s

is almost surely bounded below by a real constant.


An arbitrage opportunity is an admissible strategy θ such that the value V(θ) satisfies V_0(θ) = 0, V_T(θ) ≥ 0 almost surely and P(V_T(θ) > 0) > 0. The AOA hypothesis is the non-existence of such a strategy.

We call risk neutral probability measure any probability measure Q which is equivalent to P and such that all the discounted prices (that is e^{−rt}S_t where r is a discount coefficient, for instance the inflation rate) are (F, Q)-martingales.
A market is viable if the AOA hypothesis is satisfied. A sufficient condition is that there exists at least one risk neutral probability measure. A market is complete as soon as for every X ∈ L¹(Ω, F_T, P) there exists a strategy θ which is stochastically integrable with respect to the prices vector and such that X = E(X) + ∫_0^T θ_t dS_t.

The market under the Black and Scholes model is viable and complete, with the unique risk neutral probability measure

Q = L_T·P, dL_t = −L_t σ^{-1}(b − r) dW_t, t ∈ [0, T], L_0 = 1.

The buyer of a call option at time t = 0 pays a sum q which gives the possibility to buy at time t = T a share at price K, but without obligation. If at T, S_T > K, he exercises his right and wins (S_T − K) − q. Otherwise he does not exercise and will have lost q. Overall, he earns (S_T − K)^+ − q.

Similarly, the buyer of a put option at time t = 0 pays a sum q which gives the possibility to sell at time t = T a share at price K, but without obligation. If at T, S_T < K, he exercises his right and wins (K − S_T) − q. Otherwise he does not exercise and will have lost q. Overall, he earns (K − S_T)^+ − q.

The problem is to compute the fair price q: this is the aim of the so called Black and Scholes formula. To do this, we assume that the hedging portfolio θ is such that there exists a class C^{1,2} function C satisfying

(19) V_t(θ) = C(t, S_t).

The strategy θ = (a, d) is self-financing:

(20) V_t(θ) = a_t S_t^0 + d_t S_t = θ_0·P_0 + ∫_0^t a_s dS_s^0 + ∫_0^t d_s dS_s.

With this self-financing strategy θ, the seller of the option (for instance with claim (S_T − K)^+) could hedge the option using the initial price q = V_0(θ) and finally have V_T(θ) = C(T, S_T).

The key is the two ways of computing the stochastic differential of this value and their identification:

dV_t(θ) = ∂_t C(t, S_t) dt + ∂_x C(t, S_t) dS_t + ½ σ² ∂²_{xx} C(t, S_t) S_t² dt,


    using (19), then using (20):

dV_t(θ) = r a_t S_t^0 dt + d_t S_t (b dt + σ dW_t).

The identification gives two equations, recalling (19), the value being C(t, S_t):

(21) ∂_t C(t, S_t) + b S_t ∂_x C(t, S_t) + ½ σ² S_t² ∂²_{xx} C(t, S_t) = r a_t S_t^0 + b d_t S_t ; σ S_t ∂_x C(t, S_t) = σ d_t S_t.

Thus we get the hedging portfolio:

(22) d_t = ∂_x C(t, S_t) ; a_t = [ C(t, S_t) − S_t ∂_x C(t, S_t) ] / S_t^0.

Then we get a partial differential equation satisfied by C, using the first equation of (21) and (22):

∂_t C(t, x) + r x ∂_x C(t, x) + ½ σ² x² ∂²_{xx} C(t, x) = r C(t, x),

C(T, x) = (x − K)^+, x ∈ R+.

We can replace S_t by any x ∈ R+ since it is a lognormal random variable, thus with R+ as a support. We solve this problem using the Feynman-Kac formula. Set

dY_s = Y_s ( r ds + σ dW_s ), Y_t = x.

Then Y_s = x exp[ σ(W_s − W_t) + (s − t)(r − ½σ²) ], denoted as Y_s^{t,x}, and

C(t, x) = E[ e^{−r(T−t)} (Y_T^{t,x} − K)^+ ]

is the expected solution, the portfolio being given by equations (22), and the fair price is q = C(0, s). The so famous Black-Scholes formula allows an explicit computation of this function; setting Φ the distribution function of the standard Gaussian law:

C(t, x) = x Φ(d_1) − K e^{−r(T−t)} Φ(d_2), d_1 = [ log(x/K) + (r + ½σ²)(T − t) ] / (σ √(T − t)), d_2 = d_1 − σ √(T − t).
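Under the constant-coefficient assumptions above, the explicit formula and the hedge d_t = ∂_xC of (22) are easy to evaluate; a minimal sketch (the numerical values of s, K, r, σ, T below are arbitrary):

    import math

    def norm_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def bs_call(t, x, K, r, sigma, T):
        """Black-Scholes call price C(t, x) and delta d = d_x C(t, x)."""
        tau = T - t
        d1 = (math.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
        d2 = d1 - sigma * math.sqrt(tau)
        price = x * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)
        delta = norm_cdf(d1)               # number of risky shares d_t in (22)
        return price, delta

    price, delta = bs_call(t=0.0, x=100.0, K=95.0, r=0.03, sigma=0.2, T=1.0)
    print(price, delta)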

Actually, another way is to solve after a change of (variable, function):

x = e^y, y ∈ R ; D(t, y) = C(t, e^y),

which allows to go to the Dirichlet problem:

∂_t D(t, y) + (r − ½σ²) ∂_y D(t, y) + ½ σ² ∂²_{yy} D(t, y) = r D(t, y), y ∈ R,


D(T, y) = (e^y − K)^+, y ∈ R,

associated to the stochastic differential equation:

dX_s = (r − ½σ²) ds + σ dW_s, s ∈ [t, T], X_t = y.

This is exactly what we saw in Proposition 4.9, with g = 0, f(x) = (e^x − K)^+, k(x) = r. Thus

D(t, y) = E_y[ e^{−r(T−t)} (e^{X_T} − K)^+ ],

where X_T is the solution at time T of the SDE above. The price at time t is C(t, S_t) = E[ e^{−r(T−t)} (e^{X_T} − K)^+ / F_t ]; this is easy to compute: the law of X_T given F_t is a Gaussian law, with mean log S_t + (r − ½σ²)(T − t) and variance σ²(T − t).
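Consistently with this last remark, the price can be recovered by sampling X_T from its Gaussian conditional law; a small Monte Carlo sketch (the numerical values are arbitrary, the same as in the previous sketch so the result can be compared to the closed form):

    import numpy as np

    rng = np.random.default_rng(3)
    s, K, r, sigma, t, T = 100.0, 95.0, 0.03, 0.2, 0.0, 1.0   # arbitrary values
    tau = T - t

    # X_T given F_t is Gaussian: mean log S_t + (r - sigma^2/2) tau, variance sigma^2 tau
    X_T = rng.normal(np.log(s) + (r - 0.5 * sigma**2) * tau,
                     sigma * np.sqrt(tau), size=1_000_000)
    price_mc = np.exp(-r * tau) * np.maximum(np.exp(X_T) - K, 0.0).mean()
    print("Monte Carlo price:", price_mc)   # close to the closed-form value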


    5 Change of probability, Girsanov theorem

The motivation of this chapter is the following: martingales and local martingales are powerful tools, and it is therefore worthwhile to model reality so that the processes involved are martingales, at least locally. Thus, for the application of stochastic calculus to Finance, the data are a set of processes that model the evolution over time of share prices on the financial market, and one can legitimately ask the question:

is there a filtered probability space (Ω, F_t, P) on which the price processes are all martingales (at least locally)?

Specifically, does there exist a probability P̃ which gives this property? Hence the two problems discussed in this chapter are the following:

- how to move from a probability space (Ω, F, P) to (Ω, F, Q) in a simple way? Is there a density dQ/dP? How then are Brownian motion and martingales transformed? This is Girsanov theorem, Section 5.1. Section 5.2 gives a sufficient condition to apply Girsanov theorem.

- Finally, given a family of semimartingales on a filtered probability space (Ω, (F_t)), does there exist a probability P̃ such that all these processes are martingales on the filtered probability space (Ω, (F_t), P̃)? This is what we call a martingale problem; we will see it in Chapter 6.

We a priori consider a filtered probability space (Ω, (F_t), P) linked to a d-dimensional Brownian motion B, B_0 = 0. The filtration is generated by the Brownian motion and we denote M(P) the set of martingales on (Ω, (F_t), P).

Recall the notion of local martingales, their set being denoted as M_loc(P): adapted processes M such that there exists a sequence of stopping times (T_n) increasing to infinity and such that ∀n the T_n-stopped process M^{T_n} is a true martingale.

    5.1 Girsanov theorem

    ([20] 3.5, p 190-196; [30] 3.6, p 108-114)

Let X be an adapted measurable process in P(B):

P(B) := { X measurable adapted : ∀T, ∫_0^T ‖X_s‖² ds < +∞ P a.s. }.

This set is larger than L(B) = L²(Ω × R+, dP ⊗ dt). Generally we define for any martingale M the set

P(M) := { X measurable adapted process : ∀T, ∫_0^T ‖X_s‖² d⟨M⟩_s < +∞ P a.s. },

which contains L(M) = L²(Ω × R+, dP ⊗ d⟨M⟩). For such a process X, X.M is only a local martingale.


Thus we can define the local martingale X.B and its Doléans exponential (stochastic exponential) as soon as ∀t, ∫_0^t ‖X_s‖² ds < +∞ P a.s.:

E_t(X.B) = exp[ Σ_i ∫_0^t X_s^i dB_s^i − ½ ∫_0^t ‖X_s‖² ds ],

solution of the SDE

(23) dZ_t = Z_t Σ_i X_t^i dB_t^i ; Z_0 = 1,

which is also a local martingale since ∫_0^t Z_s² ‖X_s‖² ds < +∞ P a.s. by continuity of the integrand on [0, t].

Under some conditions, E(X.B) is a true martingale; then ∀t, E[Z_t] = 1, and this allows a change of probability measure on the σ-algebra F_t:

Q = Z_t·P, meaning that if A ∈ F_t, Q(A) = E_P[1_A Z_t].

Since Z_t > 0, both probability measures are equivalent and P(A) = E_Q[Z_t^{-1} 1_A].

Theorem 5.1. (Girsanov, 1960; Cameron-Martin, 1944) If the process Z = E(X.B) solution of (23) belongs to M(P), and if Q is the probability measure defined on F_T by Z_T·P, then:

B̃_t = B_t − ∫_0^t X_s ds, t ≤ T,

is a Brownian motion on (Ω, (F_t)_{t≤T}, Q).
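A numerical sanity check of the theorem (a sketch only, with the arbitrary choice of a constant process X_s ≡ θ): weighting functionals of B̃_T by the density Z_T under P should reproduce the moments of a centred Brownian motion.

    import numpy as np

    rng = np.random.default_rng(4)
    theta, T, n_paths = 0.7, 1.0, 1_000_000     # constant X_s = theta (arbitrary)

    B_T = rng.normal(0.0, np.sqrt(T), n_paths)  # Brownian endpoint under P
    Z_T = np.exp(theta * B_T - 0.5 * theta**2 * T)   # density E_T(theta.B)
    B_tilde = B_T - theta * T                   # Girsanov-shifted endpoint

    # under Q = Z_T.P, B_tilde_T should be a centred Gaussian with variance T
    print("E_Q[B~_T]   ~", np.mean(Z_T * B_tilde))      # ~ 0
    print("E_Q[B~_T^2] ~", np.mean(Z_T * B_tilde**2))   # ~ T
    print("E_P[Z_T]    ~", Z_T.mean())                  # ~ 1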


Moreover, under the same hypotheses, for any continuous P-local martingale M, the process N below is a Q-local martingale:

N_t = M_t − Σ_i ∫_0^t X_s^i d⟨M, B^i⟩_s.

Proof: (Exercise)

It yields as a corollary that B̃ is a Q-martingale with bracket t. To prove that it is a Q-Brownian motion, one checks that its increments have a Gaussian law (or that it is a Gaussian process).

Now we look at things in reverse order, that is, if there exist equivalent probability measures, we look for a link between the martingales related to the one or the other probability, and related to the same filtration.

Proposition 5.4. Let P and Q be two equivalent probability measures on (Ω, F) and the uniformly integrable continuous martingale Z_t = E[dQ/dP / F_t]. Then M ∈ M^c_loc(Q) if and only if MZ ∈ M^c_loc(P).


5.2 Novikov condition

(cf. [20] pages 198-201).

The previous subsection is based on the hypothesis that the process E(X.B) is a true martingale. We now look for sufficient conditions on X so that this hypothesis is satisfied. Generally E(X.B) is at least a local martingale with localising sequence

T_n = inf{ t ≥ 0 : ∫_0^t ‖E_s(X.B) X_s‖² ds ≥ n }.

Lemma 5.6. E(X.B) is a supermartingale; it is a martingale if and only if:

∀t ≥ 0, E[E_t(X.B)] = 1.

Proof: there exists an increasing sequence of stopping times T_n such that ∀n, E(X.B)^{T_n} ∈ M(P), thus for any s ≤ t we get

E[ E_{t∧T_n}(X.B) / F_s ] = E_{s∧T_n}(X.B);

using Fatou lemma, we deduce from this equality, going to the limit, that actually E(X.B) is a supermartingale (any positive local martingale is a supermartingale). Since E[E_0(X.B)] = 1, it is enough that, ∀t ≥ 0, E[E_t(X.B)] = 1 to check that E(X.B) is a martingale.

Proposition 5.7. Let M be a continuous local martingale with respect to P and Z = E(M) such that E[exp(½⟨M⟩_t)] < ∞ ∀t ≥ 0. Then ∀t ≥ 0, E[Z_t] = 1.

As a consequence (Novikov condition), if the process X satisfies

E[ exp( ½ ∫_0^t ‖X_s‖² ds ) ] < ∞ for all t ≥ 0,

then E(X.B) ∈ M(P).
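For a deterministic bounded X (which clearly satisfies the Novikov condition), the equality E[E_t(X.B)] = 1 can be checked by Monte Carlo; a minimal sketch with the arbitrary choice X_s = cos(s):

    import numpy as np

    rng = np.random.default_rng(5)
    t, n, paths = 1.0, 500, 200_000
    dt = t / n
    s = np.linspace(0.0, t, n, endpoint=False)
    X = np.cos(s)                             # deterministic and bounded: Novikov holds

    dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
    stoch_int = dB @ X                        # sum_k X(s_k) dB_k  ~  int_0^t X dB
    Z_t = np.exp(stoch_int - 0.5 * np.sum(X**2) * dt)
    print("E[Z_t] ~", Z_t.mean())             # should be close to 1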

To close this subsection, here is an example of a process X ∈ P(B) which doesn't satisfy the Novikov condition, such that E(X.B) ∈ M^c_loc(P) but is not a true martingale (Exercise):

let the stopping time T = inf{ 0 ≤ t ≤ 1 : t + B_t² = 1 } and

X_t = − (2B_t)/(1 − t) 1_{t ≤ T} ;

stopped at the times σ_n = 1 − 1/n, n ≥ 1, the process (X.B)^{σ_n} is a martingale.


6 Martingale representation theorem, martingale problem

(cf. Protter [30], pages 147-157.) The motivation of this chapter is to show that a large enough class of martingales could be identified as a stochastic integral X.B. This will allow us to find a common probability measure P under which a given family of processes are P-local martingales.

    6.1 Representation property

We here consider martingales in M^{2,c}, null at time t = 0, and satisfying ⟨M⟩_∞ ∈ L¹. Then sup_t E[M_t²] = sup_t E[⟨M⟩_t] = E[⟨M⟩_∞] < ∞. These martingales are uniformly integrable: there exists M_∞ such that ∀t ≥ 0, M_t = E[M_∞ / F_t]. Let us denote their set as H_0^2:

H_0^2 = { M ∈ M^{2,c}, M_0 = 0, ⟨M⟩_∞ ∈ L¹ }.

Definition 6.1. A vectorial subspace F of H_0^2 is called a stable subspace if ∀M ∈ F and for any stopping time T, M^T ∈ F.

Recall the following notations:

L(M) = { X adapted ∈ L²(Ω × R+, dP ⊗ d⟨M⟩) } ; L*(M) = { X progressive, P a.s. ∈ L²(R+, d⟨M⟩) };

for such processes X the stochastic integral X.M is defined, and we consider such a case.

Theorem 6.2. Let F be a closed vectorial subspace of H_0^2. Then the following are equivalent:

(i) if M ∈ F and A ∈ F_t, then (M − M^t)1_A ∈ F, ∀t ≥ 0;

(ii) F is a stable subspace;

(iii) if M ∈ F and H bounded ∈ L*(M), then H.M ∈ F;

(iv) if M ∈ F and H ∈ L*(M) ∩ L²(dP ⊗ d⟨M⟩), then H.M ∈ F.

Proof: Since L*_b(M) ⊂ L*(M) ∩ L²(dP ⊗ d⟨M⟩), the implication (iv) ⇒ (iii) is obvious.

(iii) ⇒ (ii): it is enough to consider any stopping time T and the process H_t = 1_{[0,T]}(t). Then

(H.M)_t = ∫_0^t 1_{[0,T]}(s) dM_s = M_{t∧T} ∈ F,

so M^T ∈ F.


(ii) ⇒ (i): let t be fixed, A ∈ F_t and M ∈ F. We build the stopping time T(ω) = t if ω ∈ A, and T(ω) = +∞ if not. This is actually a stopping time since A ∈ F_t. Then, on one hand:

(M − M^t)1_A = M − M^t if ω ∈ A (which is equivalent to T(ω) = t), and = 0 otherwise;

on the other hand:

M − M^T = M − M^t if ω ∈ A, and = 0 otherwise.

This means that (M − M^t)1_A = M − M^T. But F is stable, thus M and M^T ∈ F, so (M − M^t)1_A ∈ F for any t ≥ 0: this is property (i).

(i) ⇒ (iv): let H ∈ P be a simple process which could be written as:

H = H_0 1_{{0}} + Σ_i H_i 1_{(t_i, t_{i+1}]},

where H_i = 1_{A_i}, A_i ∈ F_{t_i}. Then

H.M = Σ_i 1_{A_i} (M^{t_{i+1}} − M^{t_i}) = Σ_i 1_{A_i} (M − M^{t_i})^{t_{i+1}},

which belongs to F by property (i) and the stability of the vectorial space F. For a general H ∈ L*(M) ∩ L²(dP ⊗ d⟨M⟩), we take limits of simple processes, since the set P of simple processes is dense in L*(M) ∩ L²(dP ⊗ d⟨M⟩) (cf. Proposition 2.10).

Definition 6.3. Let A be a subset of H_0^2. We denote S(A) the smallest stable closed vectorial subspace which contains A.

    DeAnition 6 .4 . Let be M and N HQ; M and N are said to be orthogon al ifE[ M ^ N J = 0 strongly or thogon al if M N is a martingale.

    M N - (M, N)M, N = 0.

    orthogonality; the converse is false: let us consider M H2 and Y a Bernoulli randomvariable (values 1 with probabilitv |), independent ofM. Let be N = YM .

    M N

Let A be a subset of H_0^2; denote A^⊥ its orthogonal space and A^× its strong orthogonal space.

Lemma 6.5. Let A be a subset of H_0^2; then A^× is a stable closed vectorial subspace.


Proof: let (M^n) be a sequence in A^×, converging to M in H_0^2, and let N ∈ A: ∀n, M^n N is a uniformly integrable martingale. On the other hand, ∀t ≥ 0, using Cauchy-Schwarz inequality,

E[ |⟨M^n − M, N⟩_t| ]² ≤ E[⟨M^n − M⟩_t] E[⟨N⟩_t],

which goes to zero. Thus ⟨M^n, N⟩_t → ⟨M, N⟩_t in L². But ∀n and ∀t, ⟨M^n, N⟩_t = 0, thus ⟨M, N⟩_t = 0 and M is strongly orthogonal to N.

Lemma 6.6. Let M and N be two martingales in H_0^2; the following are equivalent:

(i) M and N are strongly orthogonal, denoted as M × N,

(ii) S(M) × N,

(iii) S(M) × S(N),

(iv) S(M) ⊥ N,

(v) S(M) ⊥ S(N).

    Proof: exercise.

Theorem 6.7. Let M¹, ..., Mⁿ ∈ H_0^2 be such that for i ≠ j, M^i × M^j. Then

S(M¹, ..., Mⁿ) = { Σ_{i=1}^n H_i.M^i ; H_i ∈ L*(M^i) ∩ L²(dP ⊗ d⟨M^i⟩) }.

Proof: let us denote I the right member. By construction and property (iv) of Theorem 6.2, I is a stable space. Consider now the application:

⊕_i ( L*(M^i) ∩ L²(dP ⊗ d⟨M^i⟩) ) → H_0^2, (H_i) ↦ Σ_{i=1}^n H_i.M^i.

We easily check that this is an isometry, using that for i ≠ j, M^i × M^j:

‖ Σ_{i=1}^n H_i.M^i ‖² = Σ_{i=1}^n ‖ H_i.M^i ‖² = Σ_{i=1}^n E[ ∫_0^∞ |H_s^i|² d⟨M^i⟩_s ].

Thus the set I, image of a closed set by an isometry, is a closed set, so it contains S(M¹, ..., Mⁿ).

Conversely, using Theorem 6.2 (iv), any stable closed set F which contains the M^i contains too the H_i.M^i, so I ⊂ F.

Definition 6.8. Let A ⊂ H_0^2. A is said to have the predictable representation property if:

I := { X = Σ_{i=1}^n H_i.M^i , M^i ∈ A, H_i ∈ L*(M^i) ∩ L²(dP ⊗ d⟨M^i⟩) } = H_0^2.


Proposition 6.9. Let A = (M¹, ..., Mⁿ) ⊂ H_0^2 satisfying M^i × M^j, i ≠ j. If any N ∈ H_0^2 strongly orthogonal to A is null, then A has the predictable representation property.

Proof: Theorem 6.7 proves that S(A) is the set I defined above. Then let N ∈ A^×. Using Lemma 6.6 (ii),

N ∈ S(A)^× = I^×.

The hypothesis tells us that N is null, meaning I^× = {0}, thus I = H_0^2.

These orthogonality and representation properties are related to the underlying probability measure. So we have to look at what happens after a change of probability measure.

Definition 6.10. Let A ⊂ H_0^2(P) and denote M(A) the set of probability measures Q on F, absolutely continuous with respect to P, equal to P on F_0, and such that A ⊂ H_0^2(Q).

Lemma 6.11. M(A) is convex.

    Proof: exercise.

Definition 6.12. Q ∈ M(A) is said to be extremal if

Q = aQ₁ + (1 − a)Q₂, a ∈ [0, 1], Q_i ∈ M(A), implies a = 0 or 1.

Theorem 6.13. Let A ⊂ H_0^2(P). S(A) = H_0^2(P) yields that P is extremal in M(A).

Proof: cf. Th. 37 page 152 [30].

We assume that P is not extremal, so it could be written as aQ₁ + (1 − a)Q₂ with Q_i ∈ M(A) and 0 < a < 1. The probability measure Q₁ is absolutely continuous with respect to P, so it admits a density Z with respect to P, such that Z ≤ 1/a and Z_· − Z_0 ∈ H_0^2(P). Remark that P and Q₁ coincide on F_0, which implies Z_0 = 1. Let X ∈ A: it is both a P and a Q₁-martingale, thus ZX is a P-martingale and also (Z − Z_0)X = (Z − 1)X is a P-martingale; this proves that Z − Z_0 is strongly orthogonal to any X ∈ A, so to S(A). This set being H_0^2(P), Z − 1 is orthogonal to itself, so Z − 1 = 0 and Q₁ = P: P is extremal.

Theorem 6.14. Let A ⊂ H_0^2(P) and P extremal in M(A). If M is a bounded continuous P-martingale, null at 0 and strongly orthogonal to A, then M is null.

Proof: Let c be a bound of the bounded martingale M and assume M is not identically null. Thus we can define

dQ = (1 − M_∞/(2c)) dP and dR = (1 + M_∞/(2c)) dP.

Then P = ½(Q + R), Q and R are absolutely continuous with respect to P and equal to P on F_0 since M_0 = 0. Let X ∈ A ⊂ H_0^2(P): using Proposition 5.4, X ∈ M^c_loc(Q) if and only if (1 − M_t/(2c)) X_t is a P-local martingale. But X × M, so XM is a P-martingale and this property is true: X ∈ H_0^2(Q), and as well X ∈ H_0^2(R). Thus Q and R ∈ M(A). Since P = ½(Q + R) with Q ≠ R, this contradicts the extremality of P, so M is necessarily null.


Theorem 6.15. Let A = (M¹, ..., Mⁿ) ⊂ H_0^2(P) satisfying M^i × M^j, i ≠ j. P extremal in M(A) yields that A has the predictable representation property.

Proof: Proposition 6.9 proves that it is enough to show that any N ∈ H_0^2(P) ∩ A^× is null. Let N be such a martingale and take the sequence of stopping times T_n = inf{t ≥ 0 ; |N_t| > n}. The stopped martingale N^{T_n} is bounded and belongs to A^×; P is extremal, so Theorem 6.14 shows that N^{T_n} is null ∀n, so N = 0.

6.2 Fundamental theorem

Theorem 6.16. Let B be an n-dimensional Brownian motion on (Ω, F_t^B, P). Then for every M ∈ H²(P) there exist H^i ∈ L(B^i), i = 1, ..., n, such that:

M_t = M_0 + Σ_{i=1}^n (H^i.B^i)_t.

Proof: exercise. This is an application of Theorem 6.15 to the components of the Brownian motion; we prove that P is the unique element of M(B). We do as follows: let Q ∈ M(B) and the martingale Z_t = E[dQ/dP / F_t], which is a function g of B since B is a Markov process; B is both a P and Q-martingale; Girsanov theorem implies that ZB is a P-martingale, so the bracket ⟨Z, B⟩ = 0 and Itô formula proves g = 1, meaning P = Q.

Hint: use that Z_t = E[dQ/dP / F_t] is a measurable function of the vector (B¹, ..., Bⁿ).

Corollary 6.17. Under the same hypotheses, let Z ∈ L¹(F_∞, P); then there exist H^i ∈ L(B^i), i = 1, ..., n, such that:

Z = E[Z] + Σ_{i=1}^n (H^i.B^i)_∞.

Proof: apply Theorem 6.16 to the martingale M_t = E[Z / F_t] and let t go to infinity.
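As a concrete, well-known instance of this representation in dimension 1: for Z = B_T² one has E[Z] = T and H_s = 2B_s, since B_T² = T + ∫_0^T 2B_s dB_s by Itô formula. A quick numerical check (a sketch; step size and number of paths are arbitrary):

    import numpy as np

    rng = np.random.default_rng(6)
    T, n, paths = 1.0, 2_000, 20_000
    dt = T / n

    dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
    B = np.cumsum(dB, axis=1)
    B_prev = np.hstack([np.zeros((paths, 1)), B[:, :-1]])   # B at the left endpoints

    lhs = B[:, -1] ** 2                                     # Z = B_T^2
    rhs = T + np.sum(2.0 * B_prev * dB, axis=1)             # E[Z] + int_0^T 2 B_s dB_s
    print("mean abs gap:", np.mean(np.abs(lhs - rhs)))      # -> 0 as dt -> 0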

Remark: if Q is a probability measure equivalent to P with P-integrable density Z, then the martingale Z_t = E_P[Z / F_t] is an exponential martingale: there exists a process θ such that dZ_t = Z_t θ_t dB_t.

Warning! In case of a vectorial martingale M, its components not being strongly orthogonal, the set L(M) contains the set {H = (H^i), ∀i H^i ∈ L(M^i)} but they aren't equal: H ∈ L(M) means ∀t, Σ_{i,j} ∫_0^t H_s^i H_s^j d⟨M^i, M^j⟩_s < ∞.


    6.3 Martingale problem

(cf. Jacod [19], pages 337-340.) In case of Finance, it is the following problem: let a set of price processes have dynamics modelled by a family of adapted continuous processes on the filtered probability space (Ω, B, F_t, P), actually semimartingales. Does there exist a probability measure Q such that this family could be a subset of M^c_loc(Q)? This is a martingale problem. We assume that B = ∨_{t≥0} F_t.

In this subsection we consider a larger set of martingales:

H¹(P) = { M ∈ M^c_loc(P) ; sup_t |M_t| ∈ L¹ }.

This definition is equivalent to:

H¹(P) = { M ∈ M^c_loc(P) ; ⟨M⟩_∞^{1/2} ∈ L¹ },

using Burkholder inequality:

‖ sup_t |M_t| ‖_q ≤ c_q ‖ ⟨M⟩_∞^{1/2} ‖_q ≤ c'_q ‖ sup_t |M_t| ‖_q.

Definition 6.18. Let X be a family of adapted continuous processes on (Ω, B, F_t). We call solution of the martingale problem related to X any probability P such that X ⊂ M^c_loc(P). We denote M(X) the set of these solutions, and S(X) the smallest stable subset of H¹(P) containing {H.M, H ∈ L*(M), M ∈ X}.

As in Lemma 6.11, M(X) is convex.

Proof: exercise.

We note M_e(X) the extremal elements of this set.

Theorem 6.20. (cf. th. 11.2 [19] page 338.) Let P ∈ M(X). The following are equivalent:

(i) P ∈ M_e(X);

(ii) H¹(P) = S(X ∪ {1}) and F_0 = (∅, Ω);

(iii) every N ∈ M^c_b(P), strongly orthogonal to X and such that ⟨N⟩ is bounded, is null, and F_0 = (∅, Ω).

Corollary 6.21. If X = (X¹, ..., Xⁿ) is a finite family of processes, (i), (ii), (iii) are equivalent to

(iv) { Q ∈ M(X), Q ∼ P } = {P}.

    Proof:

(ii) ⇒ (iii): let M be a bounded martingale, M_0 = 0, strongly orthogonal to any element of X, meaning ⟨M, X⟩ = 0, ∀X ∈ X.


Since by hypothesis X ∪ {1} generates the set H¹(P), any N ∈ H¹(P) is the limit of a sequence of processes of the form N_0 + Σ_i H_i.X^i. Thus,

⟨M, N⟩_t = lim Σ_i ⟨M, H_i.X^i⟩_t = lim Σ_i ∫_0^t H_s^i d⟨M, X^i⟩_s,

which is null by the hypothesis on M; M is thus strongly orthogonal to any element of H¹(P). Moreover, M is bounded so belongs to H¹(P), thus it is orthogonal to itself, thus it is null.

(iii) ⇒ (ii): By the definition we get the inclusion S(X ∪ {1}) ⊂ H¹(P). But let us suppose that this inclusion is strict. Since S(X ∪ {1}) is a closed convex subset of H¹(P), there exists M ∈ H¹(P) orthogonal to S(X ∪ {1}). In particular M is orthogonal to 1, thus M_0 = 0. Let T_n = inf{t : |M_t| > n} be a sequence of stopping times such that M^{T_n} is bounded and strongly orthogonal to X; hypothesis (iii) gives M^{T_n} = 0 for every n, so M = 0, a contradiction, and the equality of both sets is satisfied.

(i) ⇒ (iii): P is extremal in M(X). Let Y be a bounded F_0-measurable random variable and N' a bounded martingale, null at zero, strongly orthogonal to X. Set N = Y − E[Y] + N' and remark that ∀t ≥ 0, E_P(N_t) = 0. Then set

a = ‖N‖_∞ ; Z_1 = 1 + N_∞/(2a) ; Z_2 = 1 − N_∞/(2a).

Obviously E(Z_i) = 1 and Z_i ≥ ½ > 0, so the measures Q_i = Z_i·P are equivalent to the probability P, and P = ½(Q_1 + Q_2).

Since Y is F_0-measurable and N' is strongly orthogonal to X, for every X ∈ X, NX is a P-local martingale. Thus (1 ± N/(2a))X is also a P-local martingale. Using Proposition 5.4, X ∈ M^c_loc(Q_i) and Q_i ∈ M(X); this contradicts that P is extremal unless N_t = 0, ∀t ≥ 0, meaning both Y = E[Y] and N' = 0. This concludes (iii).

(iii) ⇒ (i): Let us assume that P admits the decomposition in M(X): P = aQ_1 + (1 − a)Q_2, 0 < a < 1. So Q_1 is absolutely continuous with respect to P and the density Z exists, bounded, with E[Z] = 1, and since F_0 = (∅, Ω), Z_0 = 1 almost surely: Z − 1 is a bounded martingale, null at zero.

On the other hand, ∀X ∈ X, X ∈ M^c_loc(P) ∩ M^c_loc(Q_1) since P and Q_1 ∈ M(X). Once again, Proposition 5.4 proves that ZX ∈ M^c_loc(P) and (Z − 1)X ∈ M^c_loc(P), meaning Z − 1 is strongly orthogonal to any X; Hypothesis (iii) proves Z − 1 = 0, meaning Q_1 = P which, so, is extremal.

(iv) ⇒ (iii) is proved as (i) ⇒ (iii); this proof doesn't need any property of X.

(ii) ⇒ (iv): Let us assume that there exists P' ≠ P in M(X), equivalent to P. In the case where X = (X¹, ..., Xⁿ), hypothesis (ii) gives

H²(P) = { a + Σ_{i=1}^n H_i.X^i ; a ∈ R, H_i ∈ L*(X^i) ∩ L²(dP ⊗ d⟨X^i⟩), X^i ∈ X }.


Let Z be the martingale density of P' with respect to P: P' = Z·P where Z is a P-martingale, with expectation 1, equal to 1 at zero. Any X of X belongs to M^c_loc(P) ∩ M^c_loc(P'), but Proposition 5.4 says that ZX ∈ M^c_loc(P), thus (Z − 1)X ∈ M^c_loc(P), meaning that Z − 1 is strongly orthogonal to X, so to S(X ∪ {1}) = H¹(P). Localizing, we bound this martingale; the stopped martingale is orthogonal to itself, thus null, so Z = 1 and P' = P.

    6.4 Finance application

The application is twofold: if there exists a probability Q, equivalent to the natural probability, such that any price process is a Q-martingale, Q is said to be a risk neutral probability (or martingale measure), and then the market is said to be VIABLE, meaning that there exists no arbitrage (an arbitrage is to win with a strictly positive probability starting with a null initial wealth). The RECIPROCAL is false, contrarily to what is too often said or written. If moreover such a probability Q is unique, so that there is only one Q under which the (discounted) prices are martingales, the market is said to be COMPLETE.

6.4.1 Research of a risk neutral probability measure

We assume that the share prices are S^i, i = 1, ..., n, strictly positive semimartingales:

dS_t^i = S_t^i b^i dt + S_t^i Σ_j σ_j^i(t) dB_t^j.

Then look at an equivalent probability Q = E(X.B)·P = Z·P. Using Girsanov Theorem, ∀j:

B̃_t^j = B_t^j − ∫_0^t X_s^j ds

is a Q-Brownian motion, and the price S^i satisfies under Q:

dS_t^i = S_t^i ( b^i + Σ_j σ_j^i(t) X_t^j ) dt + S_t^i Σ_j σ_j^i(t) dB̃_t^j.

Thus the problem is now to find a vector X in L(B) satisfying (for instance) the Novikov condition and such that ∀i = 1, ..., n:

b^i + Σ_j σ_j^i(t) X_t^j = 0.

When n = d and the matrix σ(t) is invertible, this system admits the unique solution X_t = −σ(t)^{-1} b; when n ≠ d, there may be several such vectors X or none.
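A toy numerical sketch of the square case (arbitrary constant coefficients, n = d = 2): solving σX = −b and checking that the drift of each price vanishes after the change of drift.

    import numpy as np

    b = np.array([0.05, 0.02])                 # arbitrary drift vector
    sigma = np.array([[0.20, 0.05],
                      [0.10, 0.30]])           # arbitrary invertible volatility matrix

    X = np.linalg.solve(sigma, -b)             # solves b + sigma X = 0
    print("X =", X)
    print("residual drift b + sigma X =", b + sigma @ X)   # ~ 0 componentwise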

    6.4.2 Application: to hedge an option

In case of a complete market, using the representation Theorem, we can hedge an option.


Remember that an option is a financial asset based on a share price, but it is a right that can be carried forward in two ways:

- call option, with terminal value (S_T − K)^+,
- put option, with terminal value (K − S_T)^+,

K being the exercise price of the option and T the maturity. Concretely, at time 0 we buy the right to exchange at time T the amount K against the share (for the call), or the share against K (for the put). But to find the fair price of this contract, note that the seller of the option must be able to honor the contract, thus placing the sum obtained by selling the contract so that he can (at least in a complete market) deliver the claim at maturity T.

Definition 6.22. We call fair price of the claim H the smallest x ≥ 0 such that there exists a self-financing admissible strategy π which attains a wealth X^π with discounted terminal value e^{−rT} X_T^π = H, X_0^π = x.

Recall: a self-financing strategy θ is said to be admissible if its value

V_t(θ) = V_0 + ∫_0^t θ_s·dS_s

is almost surely bounded below by a real constant.

For instance for the call option, the claim is C_T = (S_T − K)^+, and the seller of the contract looks for a hedge. Here the martingale representation theorems are useful. If r is the discount (e.g. savings rate), e^{−rT} X_T is the discounted claim. Let us assume that we are in the scheme of 6.4.1 with n = d, σ invertible, and the market admitting a risk neutral probability measure on F_T: Q = E_T(X.B)·P. Using fondamen