
Answers for Stochastic Calculus for Finance I; Steven Shreve (v. Jul 15, 2009)

Marco Cabral ([email protected])
Department of Mathematics, Federal University of Rio de Janeiro

July 25, 2009

Chapter 1

−→ 1.1: Since S1(H) = uS0 and S1(T) = dS0, we have X1(H) = ∆0S0(u − (1 + r)) and X1(T) = ∆0S0(d − (1 + r)). Because 0 < d < 1 + r < u (condition (1.1.2)), the factors u − (1 + r) and d − (1 + r) have opposite signs. Therefore X1(H) positive implies X1(T) negative, and vice versa.

−→ 1.2: Check that X1(H) = 3∆0 + (3/2)Γ0 = −X1(T). Therefore if X1(H) is positive, X1(T) is negative, and vice versa.

−→ 1.3: By (1.1.6), V0 = S0.

−→ 1.4: Mutatis mutandis, the proof of Theorem 1.2.2 applies, replacing u by d and H by T.

−→ 1.5: Many routine computations; omitted.

−→ 1.6: We need 1.5 − V1 = ∆0S1 + (−∆0S0)(1 + r), i.e., 1.5 = V1 + ∆0(S1 − (1 + r)S0) in both outcomes. This gives ∆0 = −1/2, so we should sell short 1/2 share of stock.

−→ 1.7: See the previous exercise.

−→ 1.8:
(i) vn(s, y) = (2/5)[vn+1(2s, y + 2s) + vn+1(s/2, y + s/2)], since p̃ = q̃ = 1/2 and 1/(1 + r) = 4/5.
(ii) v0(4, 4) = 1.216. One can check that v2(16, 28) = 6.4, v2(4, 16) = 1, v2(4, 10) = 0.2 and v2(1, 7) = 0.
(iii) δn(s, y) = [vn+1(2s, y + 2s) − vn+1(s/2, y + s/2)]/[2s − s/2].
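As a numerical sanity check, here is a minimal Python sketch of the recursion in (i). It assumes the Exercise 1.8 data S0 = 4, u = 2, d = 1/2, r = 1/4 and the terminal payoff (Y3/4 − 4)+ at N = 3 (Yn being the running sum of the stock prices); those data come from the textbook exercise, not from this answer sheet:

U, D, R, N, K = 2.0, 0.5, 0.25, 3, 4.0   # assumed Exercise 1.8 data

def v(n, s, y):
    # v_n(s, y): value of the Asian call given S_n = s and Y_n = y
    if n == N:
        return max(y / (N + 1) - K, 0.0)
    # v_n(s, y) = (2/5) [ v_{n+1}(2s, y + 2s) + v_{n+1}(s/2, y + s/2) ]
    return (0.5 / (1 + R)) * (v(n + 1, U * s, y + U * s) + v(n + 1, D * s, y + D * s))

print(v(0, 4.0, 4.0))   # prints 1.216 (up to floating-point rounding)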

−→ 1.9:
(i) Vn(w) = 1/(1 + rn(w)) [p̃n(w)Vn+1(wH) + q̃n(w)Vn+1(wT)], where p̃n(w) = (1 + rn(w) − dn(w))/(un(w) − dn(w)) and q̃n(w) = 1 − p̃n(w).
(ii) ∆n(w) = [Vn+1(wH) − Vn+1(wT)]/[Sn+1(wH) − Sn+1(wT)].
(iii) p̃ = q̃ = 1/2. V0 = 9.375. One can check that V2(HH) = 21.25, V2(HT) = V2(TH) = 7.5 and V2(TT) = 1.25.


Chapter 2

−→ 2.1:
(i) A and Aᶜ are disjoint and their union is the whole space Ω. Therefore P(A ∪ Aᶜ) [= P of the whole space] = 1 = P(A) + P(Aᶜ) [since they are disjoint].
(ii) It is enough to show it for two events A and B; by induction it follows for any finite set of events. Since A ∪ B = A ∪ (B \ A), and this is a disjoint union, P(A ∪ B) = P(A) + P(B \ A). Since B \ A is a subset of B, P(B \ A) ≤ P(B) by definition (2.1.5).

−→ 2.2:
(i) P̃(S3 = 32) = P̃(S3 = 0.5) = (1/2)^3 = 1/8; P̃(S3 = 2) = P̃(S3 = 8) = 3(1/2)^3 = 3/8.
(ii) ẼS1 = (1 + r)S0 = 4(1 + r), ẼS2 = (1 + r)^2 S0 = 4(1 + r)^2, ẼS3 = (1 + r)^3 S0 = 4(1 + r)^3. The average rate of growth is 1 + r (see p. 40, second paragraph).
(iii) P(S3 = 32) = (2/3)^3 = 8/27, P(S3 = 8) = 4/9, P(S3 = 2) = 2/9, P(S3 = 0.5) = (1/3)^3 = 1/27.
We can compute ES1 directly or use p. 38, second paragraph: En[Sn+1] = (3/2)Sn. Therefore E0[S1] = E[S1] = (3/2)S0 and E1[S2] = (3/2)S1. Applying E to both sides, E[E1[S2]] = E[S2] = (3/2)E[S1] = (3/2)^2 S0. Following the same reasoning, E[S3] = (3/2)^3 S0. Therefore the average rate of growth is 3/2.
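A quick enumeration check of (iii), assuming the Exercise 2.2 data S0 = 4, u = 2, d = 1/2 and actual probabilities p = 2/3, q = 1/3 (a sketch, not part of the solution):

from itertools import product
from fractions import Fraction as F

S0, u, d, p = F(4), F(2), F(1, 2), F(2, 3)   # assumed Exercise 2.2 data

def expected_S(n):
    # E[S_n] = sum over all coin-toss paths of length n of (path probability) * (stock price)
    total = F(0)
    for path in product("HT", repeat=n):
        prob, s = F(1), S0
        for toss in path:
            prob *= p if toss == "H" else 1 - p
            s *= u if toss == "H" else d
        total += prob * s
    return total

for n in (1, 2, 3):
    print(n, expected_S(n), S0 * F(3, 2)**n)   # the last two columns agree: growth rate 3/2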

−→ 2.3: Use Jensen’s inequality and the martingale property: φ(Mn) = φ(En[Mn+1]) ≤ En[φ(Mn+1)].

−→ 2.4:
(i) En(Mn+1) = ∑_{j=1}^{n+1} En[Xj] = (taking out what is known) = En[Xn+1] + ∑_{j=1}^{n} Xj = En[Xn+1] + Mn. Since Xn+1 assumes 1 or −1 with equal probability and depends only on the (n+1)-st coin toss (independence), En[Xn+1] = E[Xn+1] = 0. Therefore En(Mn+1) = Mn.
(ii) Since Xn+1 depends only on the (n+1)-st coin toss (independence), En(e^{σXn+1}) = E(e^{σXn+1}) = (e^σ + e^{−σ})/2. Then En[e^{σMn+1}] = (taking out what is known) = e^{σMn} En[e^{σXn+1}] = e^{σMn}(e^σ + e^{−σ})/2.

−→ 2.5:
(i) Hint: Mn+1 = Mn + Xn+1 (why?) and (Xj)² = 1; therefore Mn+1² = Mn² + 2MnXn+1 + 1. Also In+1 = Mn(Mn+1 − Mn) + In = MnXn+1 + In. One can prove the claim by induction on n: by the induction hypothesis In = (1/2)(Mn² − n), and therefore In+1 = (1/2)(Mn² + 2MnXn+1 − n) = (1/2)(Mn+1² − 1 − n) = (1/2)(Mn+1² − (n + 1)).
(ii) From (i), In+1 = (1/2)(Mn + Xn+1)² − (n + 1)/2. Since Xn+1 = 1 or −1 with equal probability, En(f(In+1)) = j(Mn), where
j(m) = E[f((1/2)(m + Xn+1)² − (n + 1)/2)] = (1/2)[f((1/2)(m + 1)² − (n + 1)/2) + f((1/2)(m − 1)² − (n + 1)/2)] = (1/2)[f((1/2)(m² − n) + m) + f((1/2)(m² − n) − m)].
Since In = (1/2)(Mn² − n), En(f(In+1)) = j(Mn) = (1/2)[f(In + Mn) + f(In − Mn)].
Now we need the right-hand side to depend on In only. Since In = (1/2)(Mn² − n), Mn = ±√(2In + n), and the expression above does not change if Mn is replaced by −Mn.
So En(f(In+1)) = g(In), where g(i) = (1/2)[f(i + √(2i + n)) + f(i − √(2i + n))].
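A small numerical check of the identity En(f(In+1)) = g(In) derived above; this is only a sketch, with the horizon n = 3 and the test function f chosen arbitrarily for the check:

import math
from itertools import product

n = 3
def f(x):
    return math.exp(0.3 * x) + x**2            # arbitrary test function

def g(i):
    r = math.sqrt(2 * i + n)
    return 0.5 * (f(i + r) + f(i - r))

for xs in product((1, -1), repeat=n):          # all coin-toss paths X_1, ..., X_n
    M = [0]
    for x in xs:
        M.append(M[-1] + x)                    # M_k = X_1 + ... + X_k
    I = sum(M[j] * (M[j + 1] - M[j]) for j in range(n))      # I_n
    cond_exp = 0.5 * (f(I + M[-1]) + f(I - M[-1]))           # E_n[f(I_{n+1})] on this path
    assert abs(cond_exp - g(I)) < 1e-12
print("E_n[f(I_{n+1})] = g(I_n) holds on every path of length", n)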

−→ 2.6: It is easy to show that In is an adapted process. Next, En[In+1] = (taking out what is known) = ∆n(En(Mn+1) − Mn) + In. Since Mn is a martingale, En(Mn+1) = Mn, so the first term vanishes and En[In+1] = In.


−→ 2.7

−→ 2.8
(i) M′N−1 = EN−1(M′N) = EN−1(MN) = MN−1. Therefore M′N−1 = MN−1. Proceed by (backward) induction.
(ii) En[Vn+1](w) = p̃Vn+1(wH) + q̃Vn+1(wT) = (1 + r)Vn(w) from algorithm (1.2.16). Now it is easy to prove the claim.
(iii) This is a consequence of the fact that, for any random variable Z, the process En[Z] is a martingale.

−→ 2.9 See p. 46, (2.4.1): “models with random interest rates, it would ...”
(i) P̃(HH) = 1/4, P̃(HT) = 1/4, P̃(TH) = 1/12, P̃(TT) = 5/12.
(ii) V2(HH) = 5, V2(HT) = 1, V2(TH) = 1, V2(TT) = 0, V1(H) = 12/5, V1(T) = 1/9, V0 = 226/225.
(iii) ∆0 = 103/270.
(iv) ∆1(H) = 1.
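As a consistency check, a short computation with exact fractions. The one-step risk-neutral probabilities p̃0 = 1/2, p̃1(H) = 1/2, p̃1(T) = 1/6 follow from the probabilities in (i); the interest rates r0 = 1/4, r1(H) = 1/4, r1(T) = 1/2 are taken to be the Exercise 2.9 data (an assumption here, since they are not restated above):

from fractions import Fraction as F

V2 = {"HH": F(5), "HT": F(1), "TH": F(1), "TT": F(0)}
p0, p1H, p1T = F(1, 2), F(1, 2), F(1, 6)     # one-step risk-neutral probabilities
r0, r1H, r1T = F(1, 4), F(1, 4), F(1, 2)     # interest rates (assumed exercise data)

V1H = (p1H * V2["HH"] + (1 - p1H) * V2["HT"]) / (1 + r1H)
V1T = (p1T * V2["TH"] + (1 - p1T) * V2["TT"]) / (1 + r1T)
V0  = (p0 * V1H + (1 - p0) * V1T) / (1 + r0)
print(V1H, V1T, V0)                           # 12/5, 1/9, 226/225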

−→ 2.10
(i) Follow the proof of 2.4.5 (p. 40).
(ii) Easy.
(iii) Easy.

−→ 2.11
(i) Easy, since CN = (SN − K)+, FN = SN − K and PN = (K − SN)+.
(ii) Trivial.
(iii) F0 = 1/(1 + r)^N Ẽ(FN) = 1/(1 + r)^N Ẽ(SN − K). Since Ẽ(SN) = S0(1 + r)^N, F0 = S0 − K/(1 + r)^N.
(iv) Yes, Cn = Pn.

−→ 2.12

−→ 2.13
(i) We write (Sn+1, Yn+1) = ((Sn+1/Sn)·Sn, Yn + (Sn+1/Sn)·Sn). Now we apply the independence lemma to the variables Sn+1/Sn, Sn and Yn. We obtain En(f(Sn+1, Yn+1)) = pf(uSn, Yn + uSn) + qf(dSn, Yn + dSn). Therefore g(s, y) = pf(us, y + us) + qf(ds, y + ds).
(ii) vN(s, y) = f(y/(N + 1)). Vn = vn(Sn, Yn) = 1/(1 + r) En(Vn+1) = 1/(1 + r) En(vn+1(Sn+1, Yn+1)) = 1/(1 + r) [pvn+1(uSn, Yn + uSn) + qvn+1(dSn, Yn + dSn)]. So vn(s, y) = 1/(1 + r) [pvn+1(us, y + us) + qvn+1(ds, y + ds)].

−→ 2.14
(i) See 2.13 (i).
(ii) vN(s, y) = f(y/(N − M)). For 0 ≤ n < M, vn(s) = 1/(1 + r) [pvn+1(us) + qvn+1(ds)].
For n > M, vn(s, y) = 1/(1 + r) [pvn+1(us, y + us) + qvn+1(ds, y + ds)].
For n = M, vM(s) = 1/(1 + r) [pvM+1(us, us) + qvM+1(ds, ds)].


Chapter 3

−→ 3.1
(i) Since P > 0 and P̃ > 0, Z = P̃/P > 0. Therefore 1/Z(w) > 0 for every w.
(ii) Ẽ[1/Z] = ∑_w (1/Z(w)) P̃(w) = ∑_w P(w) = 1, replacing Z by its definition.
(iii) E[Y] = ∑_w Y(w)P(w) = ∑_w Y(w)P̃(w)/Z(w) = Ẽ[Y/Z].

−→ 3.2
(i) P̃(Ω) = ∑_w Z(w)P(w) = E[Z] = 1.
(ii) Ẽ[Y] = ∑_w Y(w)P̃(w) = ∑_w Y(w)Z(w)P(w) = E[YZ].
(iii) Since P(A) = 0, P(w) = 0 for every w ∈ A. Now P̃(A) = ∑_{w∈A} P̃(w) = ∑_{w∈A} Z(w)P(w) = 0, since P(w) = 0 for every w ∈ A.
(iv) Suppose P̃(A) = ∑_{w∈A} P̃(w) = ∑_{w∈A} Z(w)P(w) = 0. Since P(Z > 0) = 1, Z(w) > 0 whenever P(w) > 0. Therefore the sum ∑_{w∈A} Z(w)P(w) can be zero iff P(w) = 0 for every w ∈ A (“iff” means if, and only if). Therefore P(A) = 0.
(v) P(A) = 1 iff P(Aᶜ) = 1 − P(A) = 0 iff, by (iii) and (iv), P̃(Aᶜ) = 0 = 1 − P̃(A) iff P̃(A) = 1.
(vi) Let Ω = {a, b}, Z(a) = 2, Z(b) = 0, and let P(a) = P(b) = 1/2. Then P̃(a) = 1, P̃(b) = 0.

−→ 3.3 M0 = 13.5, M1(H) = 18, M1(T) = 4.5, M2(HH) = 24, M2(HT) = 6, M2(TH) = 6, M2(TT) = 1.5.
Now Mn+1 = En+1[S3], so En[Mn+1] = En[En+1[S3]]. From the properties of conditional expectation (iterated conditioning), this is equal to En[S3] = Mn.

−→ 3.5(i) Z(HH) = 9/16, Z(HT ) = 9/8, Z(TH) = 9/24, Z(TT ) = 45/12.(ii) Z1(H) = 3/4, Z1(T ) = 3/2, Z0 = 1.
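A quick check of 3.1, 3.2 and 3.5 with exact fractions; the actual measure is assumed to come from p = 2/3, q = 1/3 (so P(HH) = 4/9, etc.), and P̃ is the risk-neutral measure of Exercise 2.9:

from fractions import Fraction as F

paths = ("HH", "HT", "TH", "TT")
P  = {"HH": F(4, 9), "HT": F(2, 9), "TH": F(2, 9), "TT": F(1, 9)}    # actual measure (p = 2/3)
Pt = {"HH": F(1, 4), "HT": F(1, 4), "TH": F(1, 12), "TT": F(5, 12)}  # risk-neutral measure
Z  = {w: Pt[w] / P[w] for w in paths}                                # Z = P~/P
print([Z[w] for w in paths])             # 9/16, 9/8, 3/8 (= 9/24), 15/4 (= 45/12)
print(sum(Z[w] * P[w] for w in paths))   # E[Z] = 1

Y = {"HH": F(5), "HT": F(1), "TH": F(1), "TT": F(0)}                 # any random variable
print(sum(Y[w] * Pt[w] for w in paths) == sum(Y[w] * Z[w] * P[w] for w in paths))   # E~[Y] = E[ZY]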

Chapter 4

−→ 4.1
(ii) V0^C = 320/125 = 2.56, V2(HH) = 64/5 = 12.8, V2(HT) = V2(TH) = 8/5 = 1.6, V2(TT) = 0, V1(H) = 144/25 = 5.76, V1(T) = 16/25 = 0.64.

−→ 4.2 ∆0 = (0.4 − 3)/6 = −13/30, ∆1(H) = −1/12, ∆1(T) = −1, C0 = 0, C1(H) = 0, C1(T) = 1.

−→ 4.3 The time-zero price is V0 = 0.4 and the optimal stopping time is τ∗(H· · ·) = +∞ (never exercise on paths starting with H) and τ∗(T· · ·) = 1.
Moreover, V1(H) = V2(HH) = V2(HT) = 0, V1(T) = 1 = max(1, 14/15), V2(TH) = 2/3 = max(2/3, 4/10), V2(TT) = 5/3 = max(5/3, 31/20), V3(TTH) = 7/4 = 1.75, and V3(TTT) = 8.5/4 = 2.125.

−→ 4.4 No. With 1.36 we can set up a hedge that pays Y in every outcome and still leaves us some spare cash, because in this case the owner does not exercise at the optimal times. If the tosses are HH or HT we keep 0.36 at time one (probability 1/2); if they are TT we get nothing; and if they are TH we keep 1 at time 1 (probability 1/4). Therefore at time zero we would have 0.36(1/2) + (4/5)(1/4)·1 = 0.38 = 1.74 − 1.36.

−→ 4.5 List of the 11 stopping times that never exercise when the option is out of the money, and the corresponding values:

t(HH)  t(HT)  t(TH)  t(TT)  value
  0      0      0      0    1
 inf     2      1      1    1.36
 inf     2      2     inf   0.32
 inf     2     inf     2    0.8
 inf     2      2      2    0.96
 inf     2     inf    inf   0.16
 inf    inf     1      1    1.2
 inf    inf     2     inf   0.16
 inf    inf    inf     2    0.64
 inf    inf     2      2    0.8
 inf    inf    inf    inf   0
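The table can be reproduced with a few lines of Python. The model data below (S0 = 4, u = 2, d = 1/2, r = 1/4, put strike 5, risk-neutral p̃ = q̃ = 1/2) are those of the chapter's running American put example and are assumed here, since they are not restated in this answer:

from fractions import Fraction as F

S = {"": F(4), "H": F(8), "T": F(2), "HH": F(16), "HT": F(4), "TH": F(4), "TT": F(1)}
K, disc, INF = F(5), F(4, 5), None        # strike, one-period discount 1/(1+r), "never exercise"

def value(tau):
    # Risk-neutral expected discounted payoff of the exercise rule tau,
    # given as a tuple (t(HH), t(HT), t(TH), t(TT)); each path has probability 1/4.
    total = F(0)
    for path, t in zip(("HH", "HT", "TH", "TT"), tau):
        if t is not INF:
            total += F(1, 4) * disc**t * (K - S[path[:t]])
    return total

rules = [(0, 0, 0, 0), (INF, 2, 1, 1), (INF, 2, 2, INF), (INF, 2, INF, 2), (INF, 2, 2, 2),
         (INF, 2, INF, INF), (INF, INF, 1, 1), (INF, INF, 2, INF), (INF, INF, INF, 2),
         (INF, INF, 2, 2), (INF, INF, INF, INF)]
for rule in rules:
    print(rule, float(value(rule)))       # the maximum, 1.36, is attained by (INF, 2, 1, 1)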

−→ 4.6
(i) Since Sn/(1 + r)^n is a martingale (under the risk-neutral measure), Sn∧τ/(1 + r)^(n∧τ) is a martingale by Theorem 4.3.2. Therefore E[S0∧τ/(1 + r)^(0∧τ)] = E[SN∧τ/(1 + r)^(N∧τ)]. The first term is equal to E[S0/(1 + r)^0] = S0. The last term is equal to E[Sτ/(1 + r)^τ], since τ ≤ N.
Therefore, for any τ, E[Gτ/(1 + r)^τ] = E[(K − Sτ)/(1 + r)^τ] = E[K/(1 + r)^τ] − E[Sτ/(1 + r)^τ]. Since the discounted stock price is a martingale, this is equal to E[K/(1 + r)^τ] − S0. Since 1 + r ≥ 1, the first term is maximized by taking τ(w) = 0 for every w, in which case it equals K. Therefore the value is K − S0.

−→ 4.7 Using an argument similar to 4.6 (i) and taking τ(w) = N for every w, the value is S0 − K/(1 + r)^N.

Chapter 5

−→ 5.1
(i) We need the following property: if X and Y are independent random variables, then E(XY) = E(X)E(Y). Since τ2 − τ1 and τ1 are independent, and τ2 − τ1 has the same distribution as τ1,
E(α^{τ2}) = E(α^{τ2−τ1+τ1}) = E(α^{τ2−τ1} α^{τ1}) = E(α^{τ2−τ1}) E(α^{τ1}) = (E(α^{τ1}))².
(ii) Write τm = (τm − τm−1) + (τm−1 − τm−2) + · · · + (τ2 − τ1) + τ1. Using the same argument as in (i) we obtain the result.
(iii) Yes, since the probability of rising from level 0 to level 1 is the same as that of rising from level 1 to level 2. This is different, though, from the probability of going to level −1.

−→ 5.2


(i) We can write f(σ) = p(e^σ − e^{−σ}) + e^{−σ}. Since p > 1/2 and e^σ − e^{−σ} > 0 for σ > 0, f(σ) > (1/2)(e^σ − e^{−σ}) + e^{−σ} = cosh(σ) > 1.
Another proof: find σ0 such that f′(σ0) = 0. There is only one solution, σ0 = log(q/p)/2, and since q/p < 1, σ0 < 0. Also f′(0) = p − q > 0. Therefore f is strictly increasing for σ ≥ 0, and since f(0) = 1, f(σ) > 1 for every σ > 0.

(ii) En[Sn+1] = Sn En[e^{σXn+1}]/f(σ) = Sn E[e^{σXn+1}]/f(σ) = Sn f(σ)/f(σ) = Sn.

(iii) Follow the same argument as in pages 121 and 122.
(iv) (See p. 123.) Given α ∈ (0, 1) we need to solve for the σ > 0 which satisfies α = 1/f(σ). This is the same as solving αpe^σ + α(1 − p)e^{−σ} = 1, a quadratic equation for e^{−σ}, and we obtain e^{−σ} = (1 − √(1 − 4α²p(1 − p)))/(2α(1 − p)). Since 1 − √(· · ·) < 2α(1 − p) iff 1 − 2α(1 − p) < √(· · ·) iff α < 1, we have e^{−σ} < 1. We conclude that σ > 0.
Therefore Eα^{τ1} = (1 − √(1 − 4α²p(1 − p)))/(2α(1 − p)).

(v) Follow Corollary 5.2 (p. 124): let θ = √(1 − 4α²p(1 − p)). Then E[τ1 α^{τ1}] = ∂/∂α [(1 − θ)/(2α(1 − p))] = (1/(2α²)) (1 − θ)/((1 − p)θ) (I used a CAS, a computer algebra system: Maxima). Now let α go to 1; then θ goes to 2p − 1 (since √(1 − 4p(1 − p)) = √((2p − 1)²) = |2p − 1|, and p > 1/2, this equals 2p − 1). Therefore we obtain Eτ1 = 1/(2p − 1).
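A Monte Carlo sanity check of Eτ1 = 1/(2p − 1); this is a sketch, with a hypothetical p = 0.6 and a cap on the walk length purely as a safety measure:

import random

def first_passage_time(p, cap=10**6):
    # Steps are +1 with probability p and -1 otherwise; return the first time level 1 is reached.
    pos, n = 0, 0
    while pos < 1 and n < cap:
        pos += 1 if random.random() < p else -1
        n += 1
    return n

p, trials = 0.6, 20000
estimate = sum(first_passage_time(p) for _ in range(trials)) / trials
print(estimate, 1 / (2 * p - 1))   # both close to 5.0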

−→ 5.3
(i) σ0 = log(q/p), which is greater than 0 since q > p.
(ii) Follow the steps on p. 121 and p. 122, replacing 2/(e^σ + e^{−σ}) by 1/f(σ). Equation (5.2.11) from p. 122 (with the replacement above) then holds for σ > σ0. Here we cannot let σ go to zero; instead we let σ go to σ0. Since, from (i), e^{σ0} = q/p, we obtain the answer: p/q.
(iii) We can write Eα^{τ1} = E(I{τ1=∞} α^{τ1}) + E(I{τ1<∞} α^{τ1}). Since 0 < α < 1, α^{τ1} = 0 on {τ1 = ∞}, so the first term vanishes and Eα^{τ1} = E(I{τ1<∞} α^{τ1}). Following Exercise 5.2 (iv), E(I{τ1<∞} α^{τ1}) = Eα^{τ1} = (1 − √(1 − 4α²p(1 − p)))/(2α(1 − p)).
(iv) Follow Exercise 5.2 (v): let θ = √(1 − 4α²p(1 − p)). Then E[I{τ1<∞} τ1 α^{τ1}] = ∂/∂α [(1 − θ)/(2α(1 − p))] = (1/(2α²)) (1 − θ)/((1 − p)θ) (again computed with Maxima). Now let α go to 1; then θ goes to 1 − 2p (since √(1 − 4p(1 − p)) = √((2p − 1)²) = |2p − 1|, and p < 1/2, this equals 1 − 2p). Therefore E[I{τ1<∞} τ1] = p/((1 − p)(1 − 2p)).

−→ 5.4
(ii) P(τ2 = 2k) = P(τ2 ≤ 2k) − P(τ2 ≤ 2k − 2).
By the reflection principle, P(τ2 ≤ 2k) = P(M2k = 2) + 2P(M2k ≥ 4). Since the random walk is symmetric, 2P(M2k ≥ 4) = P(M2k ≥ 4) + P(M2k ≤ −4). Therefore P(τ2 ≤ 2k) = P(M2k = 2) + P(M2k ≥ 4) + P(M2k ≤ −4), which is equal to 1 − P(M2k = 0) − P(M2k = −2).
Replacing k by k − 1, P(τ2 ≤ 2k − 2) = 1 − P(M2k−2 = 0) − P(M2k−2 = −2). Therefore
P(τ2 = 2k) = P(M2k−2 = 0) + P(M2k−2 = −2) − P(M2k = 0) − P(M2k = −2).


The complete formula follows, since P(Mm = 0) = (m!/((m/2)!)²) (1/2)^m and P(Mm = −2) = (m!/((m/2 − 1)!(m/2 + 1)!)) (1/2)^m.
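As a small check, the following sketch compares the formula for P(τ2 = 2k) with a direct enumeration of all 2^(2k) paths for small k:

from itertools import product
from math import comb

def p_tau2_formula(k):
    def pM(m, level):                          # P(M_m = level) for the symmetric walk
        ups = m + level
        if ups % 2 or not 0 <= ups // 2 <= m:
            return 0.0
        return comb(m, ups // 2) / 2**m
    return pM(2*k - 2, 0) + pM(2*k - 2, -2) - pM(2*k, 0) - pM(2*k, -2)

def p_tau2_enum(k):
    hits = 0
    for path in product((1, -1), repeat=2*k):
        level, first_hit = 0, None
        for i, step in enumerate(path, start=1):
            level += step
            if level == 2:
                first_hit = i
                break
        if first_hit == 2*k:
            hits += 1
    return hits / 2**(2*k)

for k in (1, 2, 3, 4):
    print(k, p_tau2_formula(k), p_tau2_enum(k))   # the two columns agree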

−→ 5.5
(ii) Replace (1/2)^n in (i) by p^((n+b)/2) q^((n−b)/2): each path counted in (i) ends at level b, so it has (n + b)/2 up steps and (n − b)/2 down steps.
