
Chapter 8 Adverse Selection

Abstract The continuous-time adverse selection problems we consider can be transformed into calculus of variations problems of choosing the optimal expected utility for the agent. When the cost is quadratic, the optimal contract is typically a nonlinear function of the final output value and it may also depend on the underlying source of risk. With risk-neutral agent and principal, a range of lower type agents gets non-incentive cash contracts. As the cost of the effort gets higher, the non-incentive range gets wider, and only the highest type agents get informational rent. The rent gets smaller with higher values of cost, as do the incentives.

8.1 The Model and the PA Problem

We adopt the following variation of the hidden action model (5.4):

$$dX_t = (u_t v_t + \theta)\,dt + v_t\,dB_t^{u+\bar\theta}, \qquad \text{where } \bar\theta_t := \theta/v_t. \tag{8.1}$$

Here, θ is the skill parameter of the agent. For example, it can be interpreted as the return that the agent can achieve with zero effort. We assume here that u is the effort chosen by the agent and that the process v is fixed. Even if v were an action to be chosen, since the output process X is observed continuously, v is also observed via its quadratic variation process, and thus the principal can tell the agent which v to use. We discuss this case in a later section.

The agent to be hired by the principal is of type θ ∈ [θ_L, θ_H], where θ_L, θ_H are known to the principal. The principal does not know θ, but has a prior distribution F on [θ_L, θ_H], while the agent knows the value of θ. The principal offers a menu of lump-sum contract payoffs C_T(θ), to be delivered at time T, and agent θ can choose payoff C_T(θ′), where θ′ may or may not be equal to his true type θ. The agent's problem is defined to be

$$R(\theta) := \sup_{\theta'\in[\theta_L,\theta_H]} R(\theta,\theta') := \sup_{\theta'\in[\theta_L,\theta_H]}\,\sup_{u}\, E^{u+\bar\theta}\big[U_A\big(C_T(\theta')\big) - G_T(u;\theta)\big], \tag{8.2}$$

where U_A is the agent's utility function and G_T(u; θ) is the cost of effort. There is no continuous payment c to the agent.


8.1.1 Constraints Faced by the Principal

First, we impose the IR constraint on the agent's optimal expected utility:

R(θ) ≥ r(θ) (8.3)

where r(θ) is a given function representing the reservation utility of the type θ agent. In other words, agent θ will not work for the principal unless he can attain expected utility of at least r(θ). For example, it might be natural that r(θ) is increasing in θ, so that higher type agents require higher minimal utility. The principal knows the function r(θ).

Second, by the standard revelation principle of Principal–Agent theory, we may restrict ourselves to truth-telling contracts, that is, to contracts for which agent θ will optimally choose the contract C_T(θ). In other words, we will have

R(θ) = R(θ, θ), ∀θ ∈ [θL, θH ]. (8.4)

This is because if this were not satisfied by the optimal menu of contracts, the principal could relabel the contracts in the optimal menu so that the contract meant for agent θ is, indeed, chosen by him.

Third, as usual, we consider only implementable contracts, that is, contracts for which, for any θ, there exists a unique optimal effort of the agent, denoted u(θ), such that

$$R(\theta) = E^{u(\theta)+\bar\theta}\big[U_A\big(C_T(\theta)\big) - G_T\big(u(\theta),\theta\big)\big].$$

Under these constraints, the principal's problem is to maximize, over C_T(θ) in a suitable admissible set to be defined below, the expression

$$\int_{\theta_L}^{\theta_H} E^{u(\theta)+\bar\theta}\big[U_P\big(X_T - C_T(\theta)\big)\big]\,dF(\theta). \tag{8.5}$$
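For concreteness, the dynamics (8.1) are straightforward to simulate. The following is a minimal Euler-scheme sketch, working directly under the measure P^{u+θ̄} (so that B^{u+θ̄} is simulated as a Brownian motion); the constant effort and volatility paths are hypothetical illustrative choices, not part of the model specification.

```python
# Euler simulation of the output dynamics (8.1) under P^{u+theta_bar},
# where B^{u+theta_bar} is a Brownian motion; u, v, theta are hypothetical choices.
import numpy as np

theta, T, n, x0 = 0.3, 1.0, 10_000, 1.0
dt = T / n
rng = np.random.default_rng(0)
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # increments of B^{u+theta_bar}

u = np.full(n, 0.5)                         # constant effort path (illustrative)
v = np.full(n, 0.4)                         # fixed volatility, as assumed in this section

# dX_t = (u_t v_t + theta) dt + v_t dB_t^{u+theta_bar}
X = x0 + np.cumsum((u * v + theta) * dt + v * dB)
print(X[-1])                                # one realization of the terminal output X_T
```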

8.2 Quadratic Cost and Lump-Sum Payment

It is hard to get a handle on the constraint (8.4), and we do not have a comprehensive general theory. Instead, we restrict ourselves to the setting of Sect. 6.2: there is no continuous payment c, and we assume quadratic cost of effort

$$G_T(u,\theta) = \frac{k}{2}\int_0^T (u_s v_s)^2\,ds. \tag{8.6}$$

We could use the FBSDE approach, as we did in Sect. 6.2.2, and get necessary conditions for the agent's problem for a fixed choice of θ′. However, we opt here to apply the approach of Sect. 6.2.3, and identify alternative sufficient and necessary conditions for solving the agent's problem.


8.2.1 Technical Assumptions

The approach of Sect. 6.2.3 can be applied under the assumptions that we list next.

Assumption 8.2.1 Assume U_A, U_P are twice differentiable and v_t ≡ v is a constant. Consequently, θ̄ := θ/v is also a constant.

Assumption 8.2.2 For each θ, the set U(θ) of admissible effort processes u of the agent of type θ is the space of F^B-adapted processes u such that

(i) M^{u+θ̄} is a P-martingale, or equivalently, M^{θ̄,u} is a P^{θ̄}-martingale, where

$$M_t^{\bar\theta,u} := \exp\Big(\int_0^t u_s\,dB_s^{\bar\theta} - \frac{1}{2}\int_0^t |u_s|^2\,ds\Big) = M_t^{u+\bar\theta}\big(M_t^{\bar\theta}\big)^{-1}.$$

(ii) It holds that

$$E^{u+\bar\theta}\Big[\Big(\int_0^T |u_t|^2\,dt\Big)^2 + M_T^{u+\bar\theta}\Big] < \infty \quad\text{and}\quad E^{\bar\theta}\big[\big|M_T^{\bar\theta,u}\big|^2\big] < \infty. \tag{8.7}$$

By Remark 10.4.9, we know the above conditions imply that

$$E^{\bar\theta}\Big[\sup_{0\le t\le T}\big|M_t^{\bar\theta,u}\big|^2\Big] < \infty.$$

Given a contract C_T and θ, θ′, consider the following BSDE:

$$W_t^{A,\theta,\theta'} = e^{\kappa U_A(C_T(\theta'))} - \int_t^T Z_s^{A,\theta,\theta'}\,dB_s^{\bar\theta}, \tag{8.8}$$

where

$$\kappa := \frac{1}{kv^2}. \tag{8.9}$$

Assumption 8.2.3 Let A0 denote the set of contracts CT which satisfy:

(i) For any θ ∈ [θ_L, θ_H], C_T(θ) is F_T-measurable and

$$E\Big[\big|U_A\big(C_T(\theta)\big)\big|^4 + M_T^{\bar\theta}\,e^{2\kappa U_A(C_T(\theta))}\Big] < \infty. \tag{8.10}$$

(ii) Denoting

$$u_t(\theta,\theta') := \frac{Z_t^{A,\theta,\theta'}}{W_t^{A,\theta,\theta'}}, \tag{8.11}$$

we have u(θ, θ′) ∈ U(θ) and E^{θ̄}[∫_0^T |u_t(θ, θ′)|² dt] < ∞.

(iii) For dF-a.s. θ, C_T(θ) is differentiable in θ and {e^{κU_A(C_T(θ′))} U_A′(C_T(θ′)) |∂_θC_T(θ′)|} is uniformly integrable under P^{θ̄}, uniformly in θ′.

(iv) sup_{θ∈[θ_L,θ_H]} E^{θ̄}[e^{κU_A(C_T(θ))} |U_P(X_T − C_T(θ))|] < ∞.

Assumption 8.2.4 The admissible set A of the principal consists of contracts C_T ∈ A_0 which satisfy the IR constraint (8.3) and the revelation principle (8.4).

Under (8.10), clearly BSDE (8.8) is well-posed and W^{A,θ,θ′} > 0, so that u(θ, θ′) is well defined. We note that a direct corollary of Theorem 8.2.5 below is that any C_T ∈ A is implementable. We assume

A ≠ ∅.

8.2.2 Solution to the Agent’s Problem

We have the following results, analogous to those in Sect. 6.2.3:

Theorem 8.2.5 Assume Assumptions 8.2.1 and 8.2.2 hold, and C_T satisfies Assumption 8.2.3(i) and (ii).

(i) For any θ, θ′ ∈ [θ_L, θ_H], the optimal effort u(θ, θ′) ∈ U(θ) of the agent of type θ, faced with the contract C_T(θ′), is defined as in (8.11), after solving BSDE (8.8), and we have

$$\kappa R(\theta,\theta') = \log\big(E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta'))}\big]\big) = \log W_0^{A,\theta,\theta'}. \tag{8.12}$$

(ii) In particular, for a truth-revealing contract C_T ∈ A, the optimal effort u(θ) ∈ U(θ) for the agent is obtained by solving the BSDE

$$W_t^{A,\theta} = e^{\kappa U_A(C_T(\theta))} - \int_t^T u_s(\theta)\,W_s^{A,\theta}\,dB_s^{\bar\theta}, \tag{8.13}$$

and the agent's optimal expected utility is given by

$$\kappa R(\theta) = \log\big(E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta))}\big]\big) = \log W_0^{A,\theta}. \tag{8.14}$$

(iii) For the optimal u(θ, θ′), the change of measure process M^{u(θ,θ′)+θ̄} satisfies

$$M_T^{u(\theta,\theta')+\bar\theta} = e^{-\kappa R(\theta,\theta')}\,M_T^{\bar\theta}\,e^{\kappa U_A(C_T(\theta'))}. \tag{8.15}$$

Proof (i) For any u ∈ U(θ), denote the agent's remaining utility

$$W_t^{A,u} := W_t^{A,u,\theta,\theta'} := E_t^{u+\bar\theta}\Big[U_A\big(C_T(\theta')\big) - \frac{k}{2}\int_t^T (u_s v)^2\,ds\Big].$$

By (8.10) and (8.7) we have

$$\begin{aligned}
E^{u+\bar\theta}\big[\big|U_A\big(C_T(\theta')\big)\big|^2\big] &= E\big[M_T^{u+\bar\theta}\big|U_A\big(C_T(\theta')\big)\big|^2\big] \le \frac{1}{2}E\big[\big|M_T^{u+\bar\theta}\big|^2 + \big|U_A\big(C_T(\theta')\big)\big|^4\big] \\
&= \frac{1}{2}E^{u+\bar\theta}\big[M_T^{u+\bar\theta}\big] + \frac{1}{2}E\big[\big|U_A\big(C_T(\theta')\big)\big|^4\big] < \infty; \\
E^{u+\bar\theta}\Big[\Big(\int_0^T |u_t|^2\,dt\Big)^2\Big] &< \infty.
\end{aligned}$$

Applying Lemma 10.4.6, there exists Z^{A,u} := Z^{A,u,θ,θ′} such that

$$W_t^{A,u} = U_A\big(C_T(\theta')\big) - \frac{1}{2\kappa}\int_t^T |u_s|^2\,ds - \int_t^T Z_s^{A,u}\,dB_s^{u+\bar\theta}. \tag{8.16}$$

Next, denote

$$\widetilde W_t^{A,\theta,\theta'} := \frac{1}{\kappa}\ln\big(W_t^{A,\theta,\theta'}\big), \qquad \widetilde Z_t^{A,\theta,\theta'} := \frac{1}{\kappa}\,\frac{Z_t^{A,\theta,\theta'}}{W_t^{A,\theta,\theta'}} = \frac{1}{\kappa}\,u_t(\theta,\theta').$$

Applying Itô’s formula we have

$$\begin{aligned}
\widetilde W_t^{A,\theta,\theta'} &= U_A\big(C_T(\theta')\big) + \int_t^T \frac{\kappa}{2}\big|\widetilde Z_s^{A,\theta,\theta'}\big|^2\,ds - \int_t^T \widetilde Z_s^{A,\theta,\theta'}\,dB_s^{\bar\theta} \\
&= U_A\big(C_T(\theta')\big) - \int_t^T \Big[u_s\widetilde Z_s^{A,\theta,\theta'} - \frac{\kappa}{2}\big|\widetilde Z_s^{A,\theta,\theta'}\big|^2\Big]\,ds - \int_t^T \widetilde Z_s^{A,\theta,\theta'}\,dB_s^{u+\bar\theta}.
\end{aligned} \tag{8.17}$$

Then,

$$\begin{aligned}
W_0^{A,u} - \widetilde W_0^{A,\theta,\theta'} &= -\int_0^T \Big[\frac{1}{2\kappa}|u_s|^2 + \frac{\kappa}{2}\big|\widetilde Z_s^{A,\theta,\theta'}\big|^2 - u_s\widetilde Z_s^{A,\theta,\theta'}\Big]\,ds - \int_0^T \big[Z_s^{A,u} - \widetilde Z_s^{A,\theta,\theta'}\big]\,dB_s^{u+\bar\theta} \\
&= -\int_0^T \frac{1}{2\kappa}\big[|u_s|^2 + \big|u_s(\theta,\theta')\big|^2 - 2u_s u_s(\theta,\theta')\big]\,ds - \int_0^T \big[Z_s^{A,u} - \widetilde Z_s^{A,\theta,\theta'}\big]\,dB_s^{u+\bar\theta} \\
&= -\int_0^T \frac{1}{2\kappa}\big|u_s - u_s(\theta,\theta')\big|^2\,ds - \int_0^T \big[Z_s^{A,u} - \widetilde Z_s^{A,\theta,\theta'}\big]\,dB_s^{u+\bar\theta} \\
&\le -\int_0^T \big[Z_s^{A,u} - \widetilde Z_s^{A,\theta,\theta'}\big]\,dB_s^{u+\bar\theta},
\end{aligned}$$

with the equality holding if and only if u = u(θ, θ ′). Note that

$$\begin{aligned}
E^{u+\bar\theta}\Big[\Big(\int_0^T \big|\widetilde Z_t^{A,\theta,\theta'}\big|^2\,dt\Big)^{1/2}\Big] &= \frac{1}{\kappa}E^{\bar\theta}\Big[M_T^{\bar\theta,u}\Big(\int_0^T \big|u_t(\theta,\theta')\big|^2\,dt\Big)^{1/2}\Big] \\
&\le \frac{1}{2\kappa}E^{\bar\theta}\Big[\big|M_T^{\bar\theta,u}\big|^2 + \int_0^T \big|u_t(\theta,\theta')\big|^2\,dt\Big] < \infty.
\end{aligned}$$

Then,

$$E^{u+\bar\theta}\Big[\int_0^T \big[Z_s^{A,u} - \widetilde Z_s^{A,\theta,\theta'}\big]\,dB_s^{u+\bar\theta}\Big] = 0,$$

and thus

$$W_0^{A,u} \le \widetilde W_0^{A,\theta,\theta'}.$$

This implies that u(θ, θ′) is the agent's optimal control. Therefore,

$$R(\theta,\theta') = \widetilde W_0^{A,\theta,\theta'} = \frac{1}{\kappa}\ln W_0^{A,\theta,\theta'} = \frac{1}{\kappa}\ln\big(E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta'))}\big]\big).$$

(ii) This is a direct consequence of (i), by setting θ′ = θ.

(iii) Note that

$$dW_t^{A,\theta,\theta'} = W_t^{A,\theta,\theta'}\,u_t(\theta,\theta')\,dB_t^{\bar\theta}.$$

Then W_t^{A,θ,θ′} = W_0^{A,θ,θ′} M_t^{θ̄,u(θ,θ′)}, and thus

$$M_T^{u(\theta,\theta')+\bar\theta} = M_T^{\bar\theta,u(\theta,\theta')} M_T^{\bar\theta} = \big(W_0^{A,\theta,\theta'}\big)^{-1} W_T^{A,\theta,\theta'} M_T^{\bar\theta} = e^{-\kappa R(\theta,\theta')} e^{\kappa U_A(C_T(\theta'))} M_T^{\bar\theta}.$$

This completes the proof. □
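As a quick sanity check of (8.12), one can compare it with Monte Carlo in a case where everything is explicit: a hypothetical linear contract C_T(θ′) = a + bB_T with linear utility U_A(x) = x (the setting of Sect. 8.3 below), for which E^{θ̄}[e^{κ(a+bB_T)}] is a Gaussian exponential moment. A minimal sketch with illustrative parameter values:

```python
# Monte Carlo check of (8.12): kappa*R(theta,theta') = log E^theta_bar[exp(kappa*U_A(C_T(theta')))],
# assuming the hypothetical linear contract C_T = a + b*B_T and U_A(x) = x.
import numpy as np

k, v, T = 2.0, 0.5, 1.0                 # cost, volatility, horizon (illustrative)
theta = 0.3                             # agent's type
a, b = 0.1, 0.4                         # hypothetical contract coefficients
kappa = 1.0 / (k * v**2)                # (8.9)
theta_bar = theta / v

rng = np.random.default_rng(0)
B_T = rng.normal(theta_bar * T, np.sqrt(T), size=10**6)  # B_T ~ N(theta_bar*T, T) under P^theta_bar

kappaR_mc = np.log(np.mean(np.exp(kappa * (a + b * B_T))))                     # right side of (8.12)
kappaR_cf = kappa * a + kappa * b * theta_bar * T + 0.5 * (kappa * b)**2 * T   # closed form, cf. (8.26)
print(kappaR_mc, kappaR_cf)             # agree up to Monte Carlo error
```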

Clearly Assumption 8.2.3(ii) is important for the agent's problem. We next provide a sufficient condition for it to hold:

Lemma 8.2.6 If CT (θ ′) is bounded, then Assumption 8.2.3(ii) holds.

Proof Since U_A is continuous, U_A(C_T(θ′)) is also bounded. Let K > 0 denote a generic constant which may depend on the bound of U_A(C_T(θ′)) and on κ, and which may vary from line to line. Then, by BSDE (8.8),

$$e^{-K} \le W_t^{A,\theta,\theta'} \le e^{K} \quad\text{and}\quad E_t\Big[\int_t^T \big|Z_s^{A,\theta,\theta'}\big|^2\,ds\Big] \le K.$$

This implies that

$$E_t\Big[\int_t^T \big|u_s(\theta,\theta')\big|^2\,ds\Big] \le K.$$

Applying Lemma 9.6.5 and (9.53), we know that M^{θ̄,u(θ,θ′)} is a P^{θ̄}-martingale and

$$E^{\bar\theta}\Big[\Big(\int_0^T \big|u_t(\theta,\theta')\big|^2\,dt\Big)^4\Big] < \infty.$$

Moreover, by the arguments in Theorem 8.2.5(iii), it is clear that

$$M_T^{u(\theta,\theta')+\bar\theta} = \big(W_0^{A,\theta,\theta'}\big)^{-1} W_T^{A,\theta,\theta'} M_T^{\bar\theta} \le K M_T^{\bar\theta}.$$

Then it is straightforward to check (8.7). □

8.2.3 Principal’s Relaxed Problem

We now have workable expressions (8.12) and (8.14) for the expected utility of the type θ agent, when declaring type θ′ and when declaring the true type θ, respectively. The approach we will take is standard in the PA literature: we will find the first order condition for truth-telling and use it as an additional constraint on the menu of contracts to be offered. Eventually, once the problem is solved under such a constraint, one has to verify that the first order condition is sufficient, that is, one has to verify that the obtained contract is, in fact, truth-telling.

Note that the first order condition for truth-telling is

$$\partial_{\theta'} R(\theta,\theta')\big|_{\theta'=\theta} = 0.$$

Under this condition we get

$$\kappa e^{\kappa R(\theta)} R'(\theta) = \frac{d}{d\theta}e^{\kappa R(\theta,\theta)} = \kappa e^{\kappa R(\theta,\theta)}\big[\partial_{\theta}R(\theta,\theta) + \partial_{\theta'}R(\theta,\theta)\big] = \kappa e^{\kappa R(\theta)}\,\partial_{\theta}R(\theta,\theta).$$

From this, recalling the definition of M^{θ̄} = M^{θ/v}, and differentiating the exponential version of the first equality of (8.12) with respect to θ, we get the first order condition for truth-telling:

$$\kappa e^{\kappa R(\theta)} R'(\theta) = \frac{1}{v}E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta))} B_T^{\bar\theta}\big]. \tag{8.18}$$

In accordance with the above, and recalling (8.15), the principal's problem of maximizing (8.5) is replaced by a new, relaxed principal's problem, given by the following.

Definition 8.2.7 The relaxed principal's problem is

$$\sup_{R(\cdot)}\int_{\theta_L}^{\theta_H} \sup_{C_T(\cdot)\in A_0} e^{-\kappa R(\theta)} E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta))} U_P\big(X_T - C_T(\theta)\big)\big]\,dF(\theta) \tag{8.19}$$

under the constraints

$$R(\theta) \ge r(\theta), \qquad E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta))}\big] = e^{\kappa R(\theta)}, \qquad E^{\bar\theta}\big[e^{\kappa U_A(C_T(\theta))} B_T^{\bar\theta}\big] = v\kappa e^{\kappa R(\theta)} R'(\theta). \tag{8.20}$$

144 8 Adverse Selection

Introducing Lagrange multipliers λ and μ, the Lagrangian of the constrained optimization problem inside the integral above becomes

$$V_P(\theta,R,\lambda,\mu) := e^{-\kappa R(\theta)}\sup_{C_T(\cdot)\in A_0} E^{\bar\theta}\Big\{e^{\kappa U_A(C_T(\theta))}\big[U_P\big(X_T - C_T(\theta)\big) - \lambda(\theta) - \mu(\theta)B_T^{\bar\theta}\big]\Big\}. \tag{8.21}$$

Remark 8.2.8 If we can solve the latter problem over C_T(θ) and then find the Lagrange multipliers λ(θ), μ(θ) so that the constraints are satisfied, then the principal's relaxed problem (8.19) reduces to a deterministic calculus of variations problem over the function R(θ), the agent's expected utility. In the classical, single-period adverse selection problem with a risk-neutral principal, a continuum of types, but no moral hazard, it is also possible to reduce the problem to a calculus of variations problem, typically over the payment C_T(θ). Under the so-called Spence–Mirrlees condition on the agent's utility function and with a risk-neutral principal, a contract C_T(θ) is truth-telling if and only if it is a non-decreasing function of θ and the first order truth-telling condition is satisfied. In our model, where we also have moral hazard and a risk-averse principal, the calculus of variations problem cannot be reduced to a problem over C_T(θ), but remains a problem over the agent's utility R(θ). Unfortunately, for a general utility function U_A of the agent, we have not been able to formulate a condition on U_A under which we could find necessary and sufficient conditions on R(θ) to induce truth-telling. Later below, we are able to show that the first order approach works for a risk-neutral principal and agent, when the hazard function of θ is increasing, in agreement with the classical theory.

8.2.4 Properties of the Candidate Optimal Contract

The above problem is very difficult in general. We focus on the special case of a risk-neutral principal and agent in a later section. Here, we get some qualitative conclusions, assuming that the solution to the relaxed problem exists and that it is equal to the solution of the original problem.

The first order condition for the problem (8.21) can be written as

$$\frac{U_P'(X_T - C_T)}{U_A'(C_T)} = \kappa\big[U_P\big(X_T - C_T(\theta)\big) - \lambda(\theta) - \mu(\theta)B_T^{\bar\theta}\big]. \tag{8.22}$$

We see that, compared to the moral hazard case (6.50) (or (6.66)), there is an extra term κμ(θ)B_T^{θ̄}. The optimal contract is a function of the output value X_T and of the "benchmark" random risk level B_T^{θ̄}. Here, with constant volatility v, we can write B_T^{θ̄} = (1/v)[X_T − x − θT], and the contract is still a function only of the final output value X_T. If, on the other hand, the volatility were a time-varying process, then the optimal contract would depend on X_T and the underlying risk level B_T^{θ̄} = ∫_0^T (1/v_t)[dX_t − θ dt], and would thus depend on the history of the output X. The random variable B_T^{θ̄} can be interpreted as a benchmark value that the principal needs to use to distinguish between different agent types.

8.3 Risk-Neutral Agent and Principal

Because the first order condition (8.22) generally leads to a nonlinear equation for C_T, it is hard or impossible to solve our adverse selection problem for most utility functions. We here discuss the case of linear utility functions and a uniform prior on θ. The main results and economic conclusions thereof are contained in Theorem 8.3.1 and the remarks thereafter.

Suppose that

$$U_A(x) = x, \qquad U_P(x-c) = x - c, \qquad X_t = x + vB_t, \qquad G_T(u,\theta) = \frac{k}{2}\int_0^T (u_t v)^2\,dt \tag{8.23}$$

for some positive constants k, x, v. From (8.22) we get a linear relationship between the payoff C_T and B_T (equivalently, X_T):

$$1 = \kappa\big[x + vB_T - C_T(\theta) - \lambda(\theta) - \mu(\theta)(B_T - \bar\theta T)\big].$$

From this we can write

$$C_T(\theta) = a(\theta) + b(\theta)B_T \tag{8.24}$$

for some deterministic functions a, b. Note that, under P^{θ̄}, B_T has normal distribution with mean θ̄T and variance T. Then, for any constant α,

$$\begin{aligned}
E^{\bar\theta}\big[e^{\alpha B_T}\big] &= e^{\alpha\bar\theta T + \frac{1}{2}\alpha^2 T}, \\
E^{\bar\theta}\big[B_T^{\bar\theta} e^{\alpha B_T}\big] &= \alpha T\,e^{\alpha\bar\theta T + \frac{1}{2}\alpha^2 T}, \\
E^{\bar\theta}\big[B_T e^{\alpha B_T}\big] &= [\bar\theta + \alpha]T\,e^{\alpha\bar\theta T + \frac{1}{2}\alpha^2 T}.
\end{aligned} \tag{8.25}$$
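These identities are elementary Gaussian exponential-moment computations, using B_T ∼ N(θ̄T, T) under P^{θ̄} and B_T^{θ̄} = B_T − θ̄T; a minimal Monte Carlo sketch that checks all three, with illustrative values:

```python
# Numerical check of the moment formulas (8.25): under P^theta_bar,
# B_T ~ N(theta_bar*T, T) and B^theta_bar_T = B_T - theta_bar*T.
import numpy as np

theta_bar, T, alpha = 0.6, 1.0, 0.4
rng = np.random.default_rng(1)
B_T = rng.normal(theta_bar * T, np.sqrt(T), size=10**6)
base = np.exp(alpha * theta_bar * T + 0.5 * alpha**2 * T)

print(np.mean(np.exp(alpha * B_T)), base)                                       # first identity
print(np.mean((B_T - theta_bar * T) * np.exp(alpha * B_T)), alpha * T * base)   # second identity
print(np.mean(B_T * np.exp(alpha * B_T)), (theta_bar + alpha) * T * base)       # third identity
```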

Using (8.25), the last two equations in (8.20) imply

$$e^{\kappa a(\theta) + \kappa b(\theta)\bar\theta T + \frac{1}{2}\kappa^2 b(\theta)^2 T} = e^{\kappa R(\theta)}, \qquad \kappa b(\theta)T\,e^{\kappa a(\theta) + \kappa b(\theta)\bar\theta T + \frac{1}{2}\kappa^2 b(\theta)^2 T} = v\kappa e^{\kappa R(\theta)} R'(\theta). \tag{8.26}$$

We can solve this system, and we get, recalling (8.9) and omitting the argument θ ,

$$b = \frac{v}{T}R', \qquad a = R - \theta R' - \frac{(R')^2}{2kT}. \tag{8.27}$$
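The algebra behind (8.27), dividing the second equation of (8.26) by the first and then taking logs, can be verified symbolically; a sketch, writing Rp for R′(θ):

```python
# Symbolic verification of (8.27) from (8.26), with kappa = 1/(k*v^2) and theta_bar = theta/v.
import sympy as sp

k, v, T, theta, R, Rp = sp.symbols('k v T theta R Rp', positive=True)
kappa = 1 / (k * v**2)                  # (8.9)
theta_bar = theta / v

b = v * Rp / T                          # from dividing the two equations: kappa*b*T = v*kappa*Rp
a = R - b * theta_bar * T - sp.Rational(1, 2) * kappa * b**2 * T   # log of the first equation

print(sp.simplify(a - (R - theta * Rp - Rp**2 / (2 * k * T))))     # -> 0, i.e. (8.27)
```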

Substituting (8.27) into the principal's relaxed problem (8.19), we see that she needs to maximize

$$\int_{\theta_L}^{\theta_H} e^{\kappa a - \kappa R}\,E^{\bar\theta}\big[e^{\kappa b B_T}\big[x - a + (v-b)B_T\big]\big]\,dF(\theta),$$

which is, using (8.25), (8.26) and (8.27), equal to

$$\int_{\theta_L}^{\theta_H}\Big\{x - R(\theta) + \theta R'(\theta) + \frac{(R'(\theta))^2}{2kT} + \Big(v - \frac{vR'(\theta)}{T}\Big)\Big(\frac{\theta}{v} + \frac{1}{kv^2}\,\frac{vR'(\theta)}{T}\Big)T\Big\}\,dF(\theta). \tag{8.28}$$

Maximizing this is equivalent to minimizing

$$\int_{\theta_L}^{\theta_H}\Big\{R(\theta) + \frac{1}{2kT}\big(R'(\theta)\big)^2 - \frac{1}{k}R'(\theta)\Big\}\,dF(\theta), \tag{8.29}$$

and it has to be done under the constraint

$$R(\theta) \ge r(\theta)$$

for some given function r(θ). If this function is constant and the distribution F is uniform, we have the following result:

Theorem 8.3.1 Assume Assumptions 8.2.1, 8.2.2, 8.2.3, and 8.2.4 hold. Assume further that (8.23) holds, θ is uniform on [θ_L, θ_H], and the IR lower bound is r(θ) ≡ r_0. Then the principal's problem (8.19), under the first two constraints in (8.20) and the revelation principle (8.4), has a unique solution, as follows. Denote θ* := max{θ_H − 1/k, θ_L}. The optimal choice of the agent's utility R by the principal is given by

$$R(\theta) = \begin{cases} r_0, & \theta_L \le \theta < \theta^*; \\ r_0 + kT\theta^2/2 + T(1-k\theta_H)\theta - kT(\theta^*)^2/2 - T(1-k\theta_H)\theta^*, & \theta^* \le \theta \le \theta_H, \end{cases} \tag{8.30}$$

and consequently,

$$b(\theta) = \begin{cases} 0, & \theta_L \le \theta < \theta^*; \\ v[1 + k(\theta - \theta_H)], & \theta^* \le \theta \le \theta_H; \end{cases} \tag{8.31}$$

$$a(\theta) = \begin{cases} r_0, & \theta_L \le \theta < \theta^*; \\ r_0 - kT\theta^2 - T(1-k\theta_H)(\theta + \theta^*) - \frac{T}{2k}(1-k\theta_H)^2 - \frac{kT}{2}(\theta^*)^2, & \theta^* \le \theta \le \theta_H. \end{cases} \tag{8.32}$$

The optimal agent's effort is given by

$$u(\theta) = \begin{cases} 0, & \theta_L \le \theta < \theta^*; \\ \frac{1}{v}[1/k + \theta - \theta_H], & \theta^* \le \theta \le \theta_H. \end{cases} \tag{8.33}$$

The optimal contract is linear, of the form

$$C_T(\theta) = \begin{cases} a(\theta), & \theta_L \le \theta < \theta^*; \\ a(\theta) + [1 + k(\theta - \theta_H)](X_T - x), & \theta^* \le \theta \le \theta_H. \end{cases} \tag{8.34}$$


Note that when the agent is risk-neutral this is in agreement with the single-periodcase (2.23).
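The menu (8.30)–(8.34) is fully explicit and easy to tabulate; the sketch below, with illustrative parameter values, also previews Remark 8.3.2(i): raising the cost k widens the non-incentive range [θ_L, θ*) and flattens the incentive slope b(θ)/v = 1 + k(θ − θ_H).

```python
# Tabulating the optimal menu (8.30)-(8.34) for a uniform prior on [theta_L, theta_H].
import numpy as np

def optimal_menu(theta, k, T, v, theta_L, theta_H, r0):
    theta_star = max(theta_H - 1.0 / k, theta_L)
    inc = theta >= theta_star                     # incentive region
    quad = lambda t: k * T * t**2 / 2 + T * (1 - k * theta_H) * t
    R = np.where(inc, r0 + quad(theta) - quad(theta_star), r0)       # (8.30)
    Rp = np.where(inc, k * T * theta + T * (1 - k * theta_H), 0.0)   # R'(theta)
    b = np.where(inc, v * (1 + k * (theta - theta_H)), 0.0)          # (8.31)
    a = R - theta * Rp - Rp**2 / (2 * k * T)                         # (8.27), consistent with (8.32)
    u = np.where(inc, (1.0 / k + theta - theta_H) / v, 0.0)          # (8.33)
    return theta_star, R, a, b, u

theta = np.linspace(0.0, 1.0, 5)
for k in (2.0, 5.0):        # a higher cost k ...
    ts, R, a, b, u = optimal_menu(theta, k, T=1.0, v=0.5, theta_L=0.0, theta_H=1.0, r0=0.0)
    print(k, ts, b / 0.5)   # ... widens the flat range and shrinks the slope of C_T in X_T
```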

Remark 8.3.2 (i) If 1/k < θ_H − θ_L, a range of lower type agents gets no "rent" above the reservation value r_0, the corresponding contract is not incentive as it does not depend on X, and the effort u is zero. The higher type agents get utility R(θ), which is quadratically increasing in their type θ. It can also be computed that the principal's utility is linear in θ. As the cost k gets higher, the non-incentive range gets wider, and only the highest type agents get informational rent. The rent gets smaller with higher values of cost k, as do the incentives (the slope of C_T with respect to X_T).

(ii) Some analogous results can be obtained for a general distribution F of θ that has a density f(θ), using the fact that the solution y to the Euler equation (8.37) in the proof below satisfies

$$y(\theta) = \beta + T\theta + \alpha\int_{\theta_L}^{\theta}\frac{dx}{f(x)} + kT\int_{\theta_L}^{\theta}\frac{F(x)}{f(x)}\,dx \tag{8.35}$$

for some constants α and β.
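That (8.35) solves the Euler ODE (8.37) for a general F can be checked symbolically. Differentiating (8.35) once gives y′ = T + α/f + kTF/f (the constants and the integrals drop out), so it suffices to verify (8.37) from this expression; a sketch with F left unspecified and f = F′:

```python
# Symbolic check that (8.35) solves the Euler ODE (8.37) for a general CDF F with density f = F'.
import sympy as sp

theta, alpha, k, T = sp.symbols('theta alpha k T')
F = sp.Function('F')(theta)
f = sp.diff(F, theta)

yp = T + alpha / f + k * T * F / f                            # y'(theta), from (8.35)
ypp = sp.diff(yp, theta)                                      # y''(theta)
residual = ypp - (k * T + (T - yp) * sp.diff(f, theta) / f)   # LHS minus RHS of (8.37)
print(sp.simplify(residual))                                  # -> 0
```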

Proof of Theorem 8.3.1 We first show that (8.30)–(8.34) solve the relaxed principal's problem (8.19)–(8.20). Then, in Lemma 8.3.3 below, we check that the truth-telling constraint is indeed satisfied.

First, one can prove straightforwardly that u ∈ U and C_T ∈ A. Next, if F has density f, in light of the integrand in (8.29), denote

$$\varphi(y,y') := \Big[y + \frac{1}{2kT}(y')^2 - \frac{1}{k}y'\Big]f. \tag{8.36}$$

Here, y is a function on [θ_L, θ_H] and y′ is its derivative. Then the Euler ODE for the calculus of variations problem (8.29), denoting by y the candidate solution, is (see, for example, Kamien and Schwartz 1991)

$$\varphi_y = \frac{d}{d\theta}\varphi_{y'},$$

or, in our example,

$$y'' = kT + (T - y')\frac{f'}{f}. \tag{8.37}$$

Since θ is uniformly distributed on [θL, θH ], this gives

$$y(\theta) = kT\theta^2/2 + \alpha\theta + \beta$$

for some constants α, β. According to the calculus of variations, on every interval R is either of the same quadratic form as y, or is equal to r_0. One possibility is that, for some θ_L ≤ θ* ≤ θ_H,

$$R(\theta) = \begin{cases} r_0, & \theta_L \le \theta < \theta^*; \\ kT\theta^2/2 + \alpha\theta + \beta, & \theta^* \le \theta \le \theta_H. \end{cases} \tag{8.38}$$


In this case, R(θ) is not constrained at θ = θ_H. By standard results of the calculus of variations, the free boundary condition is, recalling notation (8.36), 0 = φ_{y′}(θ_H), which implies

$$T = y'(\theta_H), \tag{8.39}$$

from which we get

$$\alpha = T(1 - k\theta_H).$$

Moreover, by the principle of smooth fit, if θ_L < θ* < θ_H, we need to have

$$0 = R'(\theta^*) = kT\theta^* + \alpha,$$

which gives

$$\theta^* = \theta_H - \frac{1}{k}$$

if 1/k < θ_H − θ_L. If 1/k ≥ θ_H − θ_L, then we can take θ* = θ_L.

In either case the candidate for the optimal solution is given by (8.30). Another possibility would be

$$R(\theta) = \begin{cases} kT\theta^2/2 + \alpha\theta + \beta, & \theta_L \le \theta < \theta^*; \\ r_0, & \theta^* \le \theta \le \theta_H. \end{cases} \tag{8.40}$$

In this case the free boundary condition at θ = θ_L would give α = T(1 − kθ_L), but this is incompatible with the smooth fit condition kTθ* + α = 0.

The last possibility is that R(θ) = kTθ²/2 + αθ + β everywhere. We would again get that at the optimum α = T(1 − kθ_H), and β would be chosen so that R(θ*) = r_0 at its minimum point θ*. Doing the computations and comparing to the case (8.30), it is readily checked that (8.30) is still optimal.

Now (8.31) and (8.32) follow directly from (8.27), and combining with (8.24) we get (8.34) immediately.

To obtain the agent's optimal action u(θ), we note that the BSDE (8.13) leads to

$$W_t^{A,\theta} = E_t^{\bar\theta}\big[e^{\kappa C_T(\theta)}\big] = E_t^{\bar\theta}\big[e^{\kappa[a(\theta)+b(\theta)B_T]}\big] = e^{\kappa a(\theta)+\kappa b(\theta)B_t}\,E_t^{\bar\theta}\big[e^{\kappa b(\theta)(B_T - B_t)}\big].$$

By the first equation in (8.25) we have

$$W_t^{A,\theta} = e^{\kappa a(\theta) + \kappa b(\theta)B_t + \kappa b(\theta)\bar\theta(T-t) + \frac{1}{2}|\kappa b(\theta)|^2(T-t)}.$$

This leads to

$$dW_t^{A,\theta} = W_t^{A,\theta}\,\kappa b(\theta)\,dB_t^{\bar\theta},$$

and thus u(θ) = κb(θ), which implies (8.33). □

It remains to check that the contract is truth-telling. This follows from the following lemma, which is stated for a general density f.


Lemma 8.3.3 Let f be the density of F. Consider the hazard function h = f/(1 − F), and assume that h′ > 0. Then the contract C_T = a(θ) + b(θ)B_T, where a and b are chosen as in (8.27), is truth-telling.

Proof From (8.12), (8.25), and (8.27), it is straightforward to compute

$$R(\theta,\theta') = \frac{1}{\kappa}\log E^{\bar\theta}\big[e^{\kappa a(\theta') + \kappa b(\theta')B_T}\big] = a(\theta') + b(\theta')\bar\theta T + \frac{\kappa}{2}\big|b(\theta')\big|^2 T = R(\theta') + R'(\theta')(\theta - \theta').$$

We then have

$$\partial_{\theta'}R(\theta,\theta') = R''(\theta')(\theta - \theta'). \tag{8.41}$$

Here, either R(θ′) = r_0 or R(θ′) = y(θ′), where y is the solution (8.35) to the Euler ODE. If θ′ < θ*, so that R(θ′) = r_0, then we see that R(θ, θ′) = r_0, which is the lowest the agent can get, so he has no reason to pretend to be of type θ′. Otherwise, with θ′ ≥ θ* and R = y, and omitting the argument θ, note that

$$R' = T + \alpha/f + kTF/f; \qquad R'' = kT - (\alpha + kTF)\frac{f'}{f^2}.$$

The free boundary condition (8.39) for y = R is still the same, and gives

$$\alpha = -kTF(\theta_H) = -kT.$$

Notice that this implies

$$R'' = kT + kT\frac{f'}{f^2}(1-F).$$

Thus, R″ > 0 if and only if

$$f'(1-F) > -f^2. \tag{8.42}$$

This is equivalent to h′ > 0, which is assumed. From (8.41) we see that, under condition (8.42), R(θ, θ′) is increasing for θ′ < θ and decreasing for θ′ > θ, so θ′ = θ is the maximum. □
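In the uniform setting of Theorem 8.3.1, the expansion R(θ, θ′) = R(θ′) + R′(θ′)(θ − θ′) makes truth-telling easy to verify numerically; note that lower types are indifferent across the flat region, so the truthful report attains, rather than uniquely achieves, the maximum there. A sketch with illustrative values:

```python
# Numerical truth-telling check: the truthful report theta' = theta attains
# max over theta' of R(theta, theta') = R(theta') + R'(theta')*(theta - theta').
import numpy as np

k, T, v, theta_L, theta_H, r0 = 2.0, 1.0, 0.5, 0.0, 1.0, 0.0
theta_star = max(theta_H - 1.0 / k, theta_L)

def R_and_Rp(t):
    quad = lambda s: k * T * s**2 / 2 + T * (1 - k * theta_H) * s
    inc = t >= theta_star
    R = np.where(inc, r0 + quad(t) - quad(theta_star), r0)            # (8.30)
    Rp = np.where(inc, k * T * t + T * (1 - k * theta_H), 0.0)
    return R, Rp

grid = np.linspace(theta_L, theta_H, 2001)                            # reported types theta'
Rg, Rpg = R_and_Rp(grid)
for theta in (0.2, 0.6, 0.9):
    report = Rg + Rpg * (theta - grid)                                # R(theta, theta') on the grid
    R_true, _ = R_and_Rp(theta)
    print(theta, np.isclose(float(R_true), report.max()))             # True: truth attains the max
```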

8.4 Controlling Volatility

In this section we allow for the control of the risk, that is, of the diffusion coefficient(volatility) of the output process.

8.4.1 The Model

We now study the model

$$dX_t = \theta v_t\,dt + v_t\,dB_t^{\theta} = v_t\,dB_t, \qquad X_0 = x, \tag{8.43}$$


where we change the drift to θv_t. Here, the volatility v_t is to be chosen, at no cost. Since ∫_0^t v_s² ds is the quadratic variation process of X, and since we assume that X is continuously observed, v can be observed. Thus, we assume that v is in fact dictated by the principal. Actually, the optimal contract below will be such that the agent is indifferent between various choices of v.

The main application of this model would be to portfolio management. In this case θ is the return rate the manager can attain by his skills (say, by his choice of risky assets in which to invest), while v is, up to a linear transformation depending on the standard deviations and correlations of the underlying risky assets, the amount invested in the corresponding portfolio. In other words, the investors (who here constitute the principal) only have a prior distribution on the return of the portfolio of the assets the manager will pick to invest in, but they do observe his trading strategy and can estimate exactly the variance-covariance structure. This is consistent with real-world applications, as it is well known that it is much harder for the principal to estimate the expected return of a portfolio than to estimate its volatility.

We assume that v is an F^B-adapted process such that E∫_0^T v_t² dt < ∞, so that X is a martingale under P. We derive some qualitative conclusions in this section, without providing technical analysis.

Note that there is a "budget constraint" on the output X_T, which is the martingale property

$$E[X_T] = x. \tag{8.44}$$

We already used the Martingale Representation Theorem in the chapter on risk sharing; it says that for any F_T-measurable random variable Y_T that satisfies E[Y_T] = x, there exists an admissible volatility process v such that X_T = X_T^v = Y_T. This is what makes the budget constraint (8.44) a constraint on the possible choices of v.

8.4.2 Main Result: Solving the Relaxed Problem

The agent’s utility, when declaring the true type θ , is denoted

$$R(\theta) := E\big[M_T^{\theta} U_A\big(C_T(\theta)\big)\big] \tag{8.45}$$

and the IR constraint is R(θ) ≥ r(θ). Note that, unlike in Sect. 8.3, where we used M_T^{θ̄}, here we use M_T^θ. Denote

$$I_A(x) = \big(U_A'\big)^{-1}(x), \qquad I_P(x) = \big(U_P'\big)^{-1}(x).$$

Proposition 8.4.1 Suppose that the contract payoff has to satisfy the limited liability constraint

$$C_T(\theta) \ge L$$


for some constant L, and that

$$E^{\theta}\big[U_A(L)\big] \ge r(\theta), \qquad \theta\in[\theta_L,\theta_H].$$

Then the optimal payoff C_T(θ) for the principal's relaxed problem, defined in (8.49) below, is given by

$$C_T(\theta) = L \vee I_A\Big(\frac{-\nu(\theta)}{M_T^{\theta}(\lambda(\theta)+\mu(\theta)B_T)}\Big)\mathbf 1_{\{\lambda(\theta)+\mu(\theta)B_T<0\}} + L\,\mathbf 1_{\{\lambda(\theta)+\mu(\theta)B_T\ge 0\}}, \tag{8.46}$$

where λ, μ, ν are Lagrange multipliers for the IR constraint, the truth-telling first order condition, and the budget constraint, respectively. Moreover, the volatility v will be chosen so that

$$X_T = C_T(\theta) + I_P\Big(\frac{\nu(\theta)}{M_T^{\theta}}\Big). \tag{8.47}$$

Proof From (8.45), similarly as before, we get that the first order truth-telling constraint is

$$E\big[M_T^{\theta} U_A'\big(C_T(\theta)\big)\partial_{\theta}C_T(\theta)\big] = 0.$$

Differentiating (8.45) with respect to θ, we have

$$E\big[M_T^{\theta} U_A\big(C_T(\theta)\big)[B_T - \theta T] + M_T^{\theta} U_A'\big(C_T(\theta)\big)\partial_{\theta}C_T(\theta)\big] = R'(\theta),$$

which implies that

$$E\big[B_T M_T^{\theta} U_A\big(C_T(\theta)\big)\big] = R'(\theta) + \theta T R(\theta). \tag{8.48}$$

If we denote by ν the Lagrange multiplier corresponding to the budget constraint (8.44), the Lagrangian relaxed problem for the principal is then to maximize, over X_T, C_T,

$$E\Big[\int_{\theta_L}^{\theta_H}\Big(M_T^{\theta} U_P\big(X_T - C_T(\theta)\big) - \nu(\theta)X_T - M_T^{\theta} U_A\big(C_T(\theta)\big)\big[\lambda(\theta)+\mu(\theta)B_T\big]\Big)\,dF(\theta)\Big]. \tag{8.49}$$

If we take derivatives with respect to X_T inside the expectation and the integral, we get that the optimal X_T is obtained from

$$M_T^{\theta} U_P'\big(X_T - C_T(\theta)\big) = \nu(\theta), \tag{8.50}$$

which implies (8.47). Substituting this back into the principal's problem, and noticing that

$$Y_T(\theta) := X_T - C_T(\theta)$$

is fixed by (8.50), we see that we need to maximize over C_T(θ) the expression

$$E\Big[\int_{\theta_L}^{\theta_H}\Big(-\nu(\theta)\big[Y_T(\theta) + C_T(\theta)\big] - M_T^{\theta} U_A\big(C_T(\theta)\big)\big[\lambda(\theta)+\mu(\theta)B_T\big]\Big)\,dF(\theta)\Big].$$


If λ(θ) + μ(θ)B_T < 0, the integrand is maximized at C_T = I_A(−ν/(M_T^θ(λ + μB_T))). However, if λ(θ) + μ(θ)B_T ≥ 0, the maximum is attained at the smallest possible value of C_T, namely L. Thus, the optimal C_T(θ) is given by (8.46). □
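The pointwise maximization in the last step is easy to check numerically for a concrete utility; a sketch assuming U_A(c) = log c, so that I_A(y) = 1/y, with illustrative values for L, ν, M_T^θ and λ + μB_T:

```python
# Pointwise check of the maximizer in the proof of Proposition 8.4.1, assuming U_A(c) = log(c),
# hence I_A(y) = 1/y; L, nu, m (= M^theta_T) and lam_mu_B (= lambda + mu*B_T) are illustrative.
import numpy as np

L, nu, m = 0.5, 1.0, 1.2
c = np.linspace(L, 20.0, 200_000)

def integrand(c, lam_mu_B):
    # -nu*c - M^theta_T * U_A(c) * (lambda + mu*B_T), the c-dependent part of the objective
    return -nu * c - m * np.log(c) * lam_mu_B

for lam_mu_B in (-2.0, 1.5):                  # the two cases: negative / nonnegative
    c_num = c[np.argmax(integrand(c, lam_mu_B))]
    c_formula = max(L, -lam_mu_B * m / nu) if lam_mu_B < 0 else L    # (8.46) with I_A(y) = 1/y
    print(lam_mu_B, c_num, c_formula)         # numerical argmax matches the formula
```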

Remark 8.4.2 (i) Note that the optimal contract does not depend on the agent's action process v_t or the output X, but only on his type θ and the underlying (fixed) noise B_T. Thus, the agent is indifferent between different choices of the action process v given this contract, and he has to be told by the principal which process v to use.

(ii) Notice that we can write

$$B_T = \theta T + B_T^{\theta} = \int_0^T \frac{dX_t}{v_t},$$

so that the optimal payoff C_T is a function of the volatility-weighted average of the accumulated output value, which is a sufficient statistic for the unknown parameter θ. We can think of the optimal contract as a function of the underlying benchmark risk B_T.
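The invariance of the statistic ∫_0^T dX_t/v_t with respect to the choice of v is immediate from (8.43) and survives an Euler discretization exactly; a minimal sketch with hypothetical volatility paths:

```python
# Euler-scheme illustration of Remark 8.4.2(ii): int_0^T dX_t / v_t equals
# B_T = theta*T + B^theta_T for any (positive) choice of the volatility path v.
import numpy as np

theta, T, n = 0.4, 1.0, 10_000
dt = T / n
rng = np.random.default_rng(2)
dB_theta = rng.normal(0.0, np.sqrt(dt), size=n)   # increments of the P^theta-Brownian motion B^theta
t = np.arange(n) * dt

for v in (np.full(n, 0.3),                        # constant volatility
          0.2 + 0.1 * np.sin(2 * np.pi * t)):     # a hypothetical time-varying choice
    dX = theta * v * dt + v * dB_theta            # dX_t = theta*v_t dt + v_t dB^theta_t, cf. (8.43)
    print(np.sum(dX / v), theta * T + np.sum(dB_theta))   # both values equal B_T
```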

8.4.3 Comparison with the First Best

Consider the model

$$dX_t = \theta v_t\,dt + v_t\,dB_t,$$

where everything is observable. We now recall results from Chap. 3, adapted to the constraint C_T(θ) ≥ L. Denoting

$$Z_t^{\theta} = e^{-t\theta^2/2 - \theta B_t},$$

we have, at the optimum,

$$X_T - C_T(\theta) = I_P\big(\nu(\theta)Z_T^{\theta}\big), \qquad C_T(\theta) = L \vee I_A\big(\lambda(\theta)Z_T^{\theta}\big),$$

where ν(θ) and λ(θ) are determined so that E[Z_T^θ X_T] = x and the IR constraint is satisfied.

Thus, the optimal contract is of a similar form to the one we obtain for the relaxed problem in the adverse selection case, except that in the latter case there is additional randomness in determining when the contract is above its lowest possible level L; see (8.46).

In the first best case the ratio of marginal utilities U_P′/U_A′ is constant if C_T > L. In the adverse selection relaxed problem we have, omitting the dependence on θ,

$$\frac{U_P'(X_T - C_T)}{U_A'(C_T)} = -\mathbf 1_{\{C_T>L\}}[\lambda + \mu B_T] + \mathbf 1_{\{C_T=L\}}\frac{\nu}{U_A'(L)M_T^{\theta}},$$

where C_T is given in (8.46). Similarly as in the case of controlling the drift, this ratio is random.


In the first best case it is also optimal to offer the contract

$$C_T = X_T - I_P\big(\nu Z_T^{\theta}\big), \tag{8.51}$$

and this contract is incentive compatible in the sense that it will induce the agent to implement the first best action process v, without the principal telling him what to do. This is not the case with adverse selection, in which the agent is given the contract payoff (8.46) and has to be told which v to use.

8.5 Further Reading

Classical adverse selection models are covered in the books Laffont and Martimort (2001), Salanie (2005), and Bolton and Dewatripont (2005). Two papers in continuous time that feature both adverse selection and moral hazard are Sung (2005), analyzing a model in which the principal observes only the initial and the final value of the underlying process, and Sannikov (2007). Our approach expands slightly on the models from Cvitanic and Zhang (2007). An extension of the Sannikov (2008) model to adverse selection is analyzed in Cvitanic et al. (2012).

References

Bolton, P., Dewatripont, M.: Contract Theory. MIT Press, Cambridge (2005)

Cvitanic, J., Zhang, J.: Optimal compensation with adverse selection and dynamic actions. Math. Financ. Econ. 1, 21–55 (2007)

Cvitanic, J., Wan, X., Yang, H.: Dynamics of contract design with screening. Manag. Sci. (2012, forthcoming)

Laffont, J.J., Martimort, D.: The Theory of Incentives: The Principal–Agent Model. Princeton University Press, Princeton (2001)

Salanie, B.: The Economics of Contracts: A Primer, 2nd edn. MIT Press, Cambridge (2005)

Sannikov, Y.: Agency problems, screening and increasing credit lines. Working paper, Princeton University (2007)

Sannikov, Y.: A continuous-time version of the principal-agent problem. Rev. Econ. Stud. 75, 957–984 (2008)

Sung, J.: Optimal contracts under adverse selection and moral hazard: a continuous-time approach. Rev. Financ. Stud. 18, 1121–1173 (2005)