perturbed communication games with honest senders and naive
TRANSCRIPT
Perturbed Communication Games with Honest
Senders and Naive Receivers∗
Ying Chen†
Department of Economics, Arizona State University
January, 2010
Abstract
This paper studies communication games in which the sender is possibly honest
(tells the truth) and the receiver is possibly naive (follows messages as if truthful).
By characterizing message-monotone equilibria in the perturbed games, the paper
provides an explanation for several important aspects of strategic communication
not explained in the canonical model, including sender exaggeration, receiver skep-
ticism and the clustering of messages. The paper also derives a surprising result
that the strategic receiver may respond to more aggressive claims with more mod-
erate actions. In the limit as the probabilities of the non-strategic players approach
zero, (i) the limit equilibrium outcome corresponds to a most-informative equilib-
rium outcome of the limit (Crawford-Sobel) game; (ii) only the messages at the top
of the message space are sent with positive probability. The paper also establishes
equilibrium existence when the message space is finite and shows why existence
may fail when the message space is continuous.
Keywords: Communication, Honest Senders, Naive Receivers, Sender Exaggera-
tion, Receiver Skepticism, Clustering of Messages, Non-montone Receiver Reaction,
Finite Message Space
J.E.L. Classification: C72, D82, D83
∗First version: December 2004. An earlier version of this paper was circulated under the same
title. A portion of that earlier version has been taken out and incorporated into "Selecting Cheap-Talk
Equilibria" (Chen, Kartik and Sobel, 2008). This is a drastically revised version of the earlier paper. I
am indebted to my dissertation advisors David Pearce, Stephen Morris and Dino Geradi for their advice
and support. I thank Vincent Crawford, Ezra Friedman, Navin Kartik, Alvin Klevorick, Edward Schlee,
Joel Sobel and audience at various seminars and conferences for helpful comments and suggestions.†Department of Economics, Arizona State University, P.O. Box 873806, Tempe, AZ 85287-3806.
Email: [email protected].
1
1 Introduction
Cheap-talk models following the seminal work by Crawford and Sobel (1982) and Green
and Stokey (2007, earlier version 1981) have had tremendous success in explaining how in-
formation is strategically transmitted when agents have only partially aligned interests.1
Several important aspects of strategic communication, however, cannot be explained by
the canonical model. To illustrate, consider a security analyst making stock recommen-
dations to an investor. Analysts face a well-known conflict of interest — although they
want to provide reliable advice to attract customers, they are often under pressure to
bias their recommendations upwards, especially when affiliated with the underwriter.
Empirical studies2 have found that analysts’ recommendations are often inflated, many
investors adjust their trading responses downward and “buy” and “hold” recommenda-
tions are issued much more frequently than “sell” recommendations.
The canonical model cannot explain at least three aspects of this example: sender
exaggeration, receiver skepticism, and the relative frequency of different recommenda-
tions. It has long been understood, going back to Crawford and Sobel (1982), that the
canonical model does not predict how messages are used or interpreted in equilibrium be-
cause the players do not care about messages per se and therefore the literal meanings of
messages do not affect equilibrium behavior. Another element of the canonical Crawford
and Sobel (1982) (henceforth “CS”) model is that all players are fully strategic. A main
goal of this paper is to show, through a tractable model, that these aspects of strategic
communication that cannot be explained in the canonical model arise naturally when the
players are not all fully strategic.3
In the CS model, a sender privately observes the state of the world and then sends
a costless message to the uninformed receiver; upon receiving the message, the receiver
chooses an action that affects both players’ payoffs. My model departs from CS by assum-
ing that with positive probability, the sender is honest (truthfully reports his observation)
and the receiver is naive (follows whatever message is sent to her as if they were truth-
1Examples from different areas abound, including, for example, Matthews (1989) in political economy
and Morgan and Stocken (2003) in financial economics.2See, for example, Michaely and Womack (2005) and Malmendier and Shanthikumar (2007).3There is a long tradition, going back to at least Kreps and Wilson (1982) and Migrom and Roberts
(1982), of introducing behavioral types to explain phenomena that cannot be explained by fully strategic
players. That the players may be non-strategic is consistent with experimental and other empirical
evidence. Both experimental studies by Forsythe, Lundholm and Rietz’s (1999) and Cai and Wang
(2006) find that in cheap-talk games, some sender subjects displayed a tendency to reveal the true
state and some receiver subjects showed a certain amount of gullibility even when their opponents have
a clear incentive to lie. Malmendier and Shanthikumar’s (2007) empirical study finds that although
large traders take into consideration the incentives of stock analysts, small traders follow stock analysts’
recommendations literally.
2
ful).4 Otherwise the players act fully strategically to maximize their expected payoffs. (I
call these players “dishonest” senders and “sophisticated” receivers, or strategic players
in general.) Whether a player is strategic or not is private information.
A sender’s “honesty” and a receiver’s “naivety” are relative to the literal meanings of
the messages. In my model, there are a finite number of messages available. The message
has a commonly understood literal meaning “my observation of the state is in the
set ,” and the collection of ’s forms a partition of the state space. This captures the
characteristics of the message spaces typically used in many communication situations.
(For example, graduate admission offices often ask recommenders to describe a student
as belonging to one of the following categories: extraordinary (top 2%), exceptional (top
5%), outstanding (top 10%), superior (top 15%), above average (top 25%), average (top
50%), and below average (bottom 50%).) My model assumes that an honest sender sends
the message if and only if his observation of the state is in the set and when a naive
receiver receives the message , she responds as if the state is in the set .
Although the honest sender’s and the naive receiver’s behavior are fixed, the strategic
players’ use and interpretation of messages arise endogenously in equilibrium, and they
are the focus of my analysis. In section 3, I provide a characterization of the players’
strategies in the class of message-monotone equilibrium (i.e., equilibrium in which the
sender’s strategy is nondecreasing in the state) when the probabilities of the non-strategic
players are positive.
Specifically, suppose the dishonest sender has an upward bias. Then, in a message-
monotone equilibrium, the dishonest sender pools with the honest sender who has on
average a higher observation, i.e., the dishonest sender sends a message whose literal
meaning implies a higher observation of the state: sender exaggeration arises in equilib-
rium.
Of course, the strategic receiver cannot be fooled systematically. Since she does not
observe whether the sender is honest or dishonest, she discounts any message potentially
sent by the dishonest sender. For messages at the low end of the message space that are
sent only by the honest sender, they are “credible” and even the sophisticated receiver
believe them. However, every message higher than (0) (i.e., the equilibrium message
sent by the dishonest sender with the lowest observation) is sent by a positive measure
of the dishonest sender in equilibrium. Hence, for messages sufficiently high (higher than
(0)), we have receiver skepticism: the sophisticated receiver discounts the face value
of the claim and her response is strictly lower than the naive receiver’s.
4These types correspond to the L0 types ("truster" and "believer") in Cai and Wang (2006). They
find evidence of other behavior types of higher levels of sophistication (according to their classification)
that are anchored on the L0 types. For tractability, my model incorporates only the L0 types.
3
Another novel aspect of the perturbed game is the clustering of messages, i.e., only
those messages at the top of the message space are sent by a “large” measure of the
dishonest sender. This is illustrated in figure 1 and it provides an explanation for the
relative frequency of messages used: for example, professors’ recommendations of students
tend to concentrate in the highest categories, and “strong buy” and “buy” are issued much
more frequently by financial analysts than “sell” recommendations.
Perhaps the most surprising result is that the sophisticated receiver’s equilibrium ac-
tions are not necessarily increasing in the messages she receives, even if the sender follows
an increasing strategy. That is, the sophisticated receiver may react to a more aggressive
claim with a more moderate action. How can this happen? Roughly speaking, a higher
message may lead to a lower action if the receiver believes that with high probability, the
message is sent by the dishonest sender and therefore discounts the claim heavily.
As the probabilities of the non-strategic players go to zero, the perturbed game con-
verges to the canonical Crawford-Sobel game. Another goal of this paper is to investigate
what equilibrium outcomes survive in the limit and how messages are strategically used
and interpreted when the perturbation is vanishingly small. The main result, Proposition
2 in section 4.1, shows that as the probabilities of the non-strategic players go to zero,
the limit equilibrium outcome corresponds to a “most informative” equilibrium outcome
(i.e., one that has the maximal number of steps) in the limit game. This complements
the findings in Chen, Kartik and Sobel (2008). They introduce a selection criterion called
“no incentive to separate” (NITS) and shows that it uniquely selects the most informa-
tive equilibrium in the CS game under some regularity conditions. My analysis provides
a foundation of this selection criterion by showing that NITS must be satisfied in a limit
equilibrium outcome of the perturbed games. Additionally, Proposition 2 says more than
what equilibrium outcome survives in the limit — it also shows that the sender uses only
the messages at the top of the message space to convey information in the limit, resulting
in the starkest form of message clustering.
These results show that we have sharp predictions about how players use and interpret
messages strategically when they have a preexisting common language, and with (arbi-
trarily small) positive probability, some players are “literal minded.” The perturbation
approach taken in this paper complements various other ways to understand the role of
a preexisting common language in strategic communication. For example, Farrell (1993)
and Matthews, Okuno-Fujiwara and Postelwaite (1991) focus on how messages’ literal
meanings restrict players’ beliefs about unexpected announcements and how that affects
equilibrium selection. More recent work by Lo (2006) takes a somewhat different ap-
proach and looks at the implications of the restrictions a common language may have on
4
players’ strategies in cheap-talk games. These papers, including mine, that incorporate
a preexisting common language have rather distinct goals from another strand of liter-
ature (e.g., Blume, Kim and Sobel (1993) and Warneryd (1993)) that use evolutionary
arguments to explain the emergence of the meaning of words when a common language
does not exist.
One modeling difference between my paper and related papers is that my paper
assumes a finite message space whereas other papers typically assume a continuous mes-
sage space. Besides the advantage of fitting many communication situations well, a finite
message space also has interesting theoretical implications — while a message-monotone
equilibrium always exists when the message space is finite, as shown by Proposition 5,
existence may fail when the message space is continuous. In section 5, I use an example
to illustrate this possible failure of existence. Applying results in Manelli (1996), I also
show that adding “cheap-talk” extension restores existence in the game with a continuous
message space.
Other Related Literature
A number of recent and independent papers have introduced non-strategic players
and other perturbations into sender-receiver games. Let me briefly discuss the connection
between these papers and mine.
In Kartik, Ottaviani and Squintani (2006), the receiver is possibly naive, or the sender
has a cost of lying. Under the crucial assumption that the state and message spaces are
unbounded, they show that a fully revealing equilibrium exists in which the sender follows
a strictly monotone (hence invertible) strategy. Ottaviani and Squintani (2006) also allow
the possibility that the receiver is naive but assume that the state and message spaces
are bounded.5 They construct an equilibrium in which the sender’s strategy is fully
revealing in a low range of the state space and partitional in the top range. While this
is somewhat analogous to the characterization in my model,6 their construction does not
derive how messages are used. Instead, it assumes that the sender uses messages close to
one another for the partitions in the top range, so it does not explain message clustering.
Moreover, Ottaviani and Squintani (2006) find that all equilibrium outcomes in the CS
model are robust to their perturbation of the naive receivers and hence do not provide
5So there are two main modeling differences between my paper and theirs. One is that my paper
allows the possibility of nonstrategic types on both sides whereas their paper only allows the possibility
that the receiver is naive. The other is that the message space is finite in my paper whereas it is
continuous in theirs.6It is only somewhat analogous because in my model the strategic sender’s strategy is never fully
revealing, even at the low range of the state space — the strategic sender always pools with the honest
sender. Additionally, because the message space is finite, the strategic sender of different observations
also pool on the same message.
5
any argument for equilibrium selection in the CS model.7
Kartik (2008)8 introduces another kind of perturbation: the sender incurs a convex
cost of lying. His equilibrium characterization has similar properties to those in Otta-
viani and Squintani (2006): low types separate while high types pool and the receiver’s
action is increasing. While this explains sender exaggeration, it does not explain message
clustering.9 Moreover, Kartik’s results are derived under the refinement of monotonic D1
criterion that restricts beliefs off the equilibrium path whereas refinement is not needed
in my paper because the existence of the honest sender implies that Bayes’ rule always
applies. In all of the related papers discussed, the receiver’s strategy is increasing in the
message received. So the interesting phenomenon of more moderate reactions to more
aggressive claims never arises in these models.
Another paper that incorporates boundedly rational players into communication is
Crawford (2003). He looks at a binary, asymmetric, zero-sum game in which one player
can costlessly signal his intention of play. By introducing “mortal types,” Crawford
(2003) finds that misrepresentation of intentions can be successful sometimes.
My paper is also related to a number of studies of strategic communication in which
incomplete information about the players’ preferences plays an important role. Morgan
and Stocken (2003) analyze how uncertainty about stock analysts’ incentives affects stock
recommendations. Sobel (1985) models the dynamics of a sender’s “credibility” in a long
term relationship when he can be either a “friend” or an “enemy” to the receiver. Morris
(2001) explains that an advisor whose preferences are identical with the decision maker’s
may have a reputational incentive to lie and be “politically correct” because he does not
want to be perceived as being biased.
2 The Model
The benchmark is the classic model of strategic information transmission introduced by
Crawford and Sobel (1982). There are two players, a sender () and a receiver (). At
the beginning of the game, privately observes the state of the world, and sends a
7In addition to considering two-sided perturbations, my paper also explores what happens if the
perturbation is only on one side. In contrast to the one-sided perturbation of the naive receivers,
my paper finds that only the most informative equilibrium outcome in the CS model is robust to the
perturbation of the honest sender. This highlights the role of the discipline on the receiver’s belief in
ruling out less informative equilibria.8An earlier version of Kartik (2008) was circulated under the title “Information Transmission with
Almost-Cheap Talk.”9In Kartik’s characterization, types separate below a cutoff, and above this cutoff, types pool on
messages that have the lowest lying cost for the highest type and they may further separate by using
different messages that have the same lying cost.
6
costless message, , to . Upon receiving , chooses an action, , which affects both
players’ payoffs.
Suppose the von Neumann-Morgenstern utility functions for and are: ( ) =
− (− − )2and ( ) = − (− )
2. The “bias” of the sender, , parameterizes the
divergence of interest between the two parties. Without loss of generality, I assume that
≥ 0. So for any , the sender’s ideal point is always higher than the receiver’s. The
state space is Ω = [0 1] and the action space is = R. The players’ common prior on
has distribution , with distribution function (·) ∈ C1 and density (·). Assume0 () ∞ for ∈ [0 1]. If () = 1 for all ∈ [0 1], then we have the leading“uniform-quadratic” case of CS.
My model departs from CS by introducing two non-strategic types. On the sender’s
side, there is an “honest” type who reports truthfully. On the receiver’s side, there is a
“naive” type who follows the messages as if truthful. A player’s “honesty” or “naivety”
is relative to the literal meanings of messages, and they are described as follows. Fix a
partition of the state space Ω : I = 1 2 where each element of the partitionhas a positive measure and sup = inf+1 for = 1 −1. Let the message spacebe = 12 −1.10 One can interpret ∈ as having the literal
meaning: “My observation of is in .” The honest sender reports truthfully in the sense
that he reports if and only if ∈ .11 Since the naive receiver follows the messages
as if they were truthful, it follows that when receiving , she believes that ∈ .
Given the quadratic payoff function, she chooses an action equal to the expectation of
conditional on ∈ . To simplify notation, let = (| ∈ ) =
∈
(∈) for
= 1 . So the naive receiver chooses an action = when receiving .12
One can think of the non-strategic players as having different preferences from the
10There are other ways to model the message space. For example, one can define the message space
as a finite set = =1 where ∈ [0 1]. Assume that the honest sender sends the message in that is closest to her observation and the naive receiver believes that is sent by those types that
find closest. The equilibrium properties found in Proposition 1 still hold under this assumption.11This paper assumes that the honest sender observes . An alternative assumption is that the honest
sender has a noisy signal that tells him the element of the partition I. All the results go through under
this assumption.12The broad idea behind the notion of honest sender and naive receiver is the following: there is an
exogenous, commonly-known mapping between the sender’s type and messages. The honest sender is
faithful to this mapping in her report and the naive receiver responds to the messages as if the sender
was faithful to this mapping. The exact form of this exogenous mapping will of course affect the results,
but sometimes there may be particularly natural ways to define it. In the current model, it is natural
to define honesty as reporting if and only if ∈ .
Moreover, because there is a one-to-one correspondence between a message and the naive receiver’s
best response in my model, one can without loss of generality assume that is equal to the naive
receiver’s best response. This particular assumption was chosen only to simplify notation so that the
naive receiver’s response to is equal to . There are other ways to define messages without changing
the model conceptually. For example, we can just let = and the results will go through.
7
strategic ones. For example,13 assume that the honest sender’s payoff function is ( ) =
0 if ∈ and ( ) = 0 if ∈ and the naive receiver’s payoff function is
() = − (−)2. Then it is a dominant strategy for the honest sender to report
when ∈ and for the naive receiver to choose = when receiving . I will call
those players who have the same preference as in the original CS model the “dishonest
sender” and the “sophisticated receiver”, or the “strategic players” in general.
Denote by the probability that is honest and by the probability that is
naive. ’s type space is = Ω× where = .14 ’s type space is
= . Assume that the two elements of ’s type have independentdistributions and are also independent of ’s type distribution. Let denote the players’
common prior on .
The game described above is not a standard signaling game because both the sender
and the receiver have private information. But the only significant role the receiver’s pri-
vate information plays is changing the payoff function of the sender. So for the remainder
of the paper, I will study the following signaling game Γ[( ) I ], a re-
formulation of the game described above. In game Γ, player first observes his type
= ( ) from and then sends a signal from (as defined on page 7, is a
function of I and ). Player receives, infers ’s probable type and selects an action
from . The game ends and each player receives payoff ( = ), which is defined
as follows.
Let 1(·) be the indicator function such that 1() = 1 if = , 1() = 0
if = . Define ( ) = − ¡ (− − )2 − (1− ) (− − )
2¢1()
− () (1− 1()) and ( ) = −(− )2.
Everything in the game except is common knowledge. For notational convenience,
sometimes I will write Γ for the game with the message space . Another useful
piece of notation is ( () ) = − (− − )2− (1− ) ( ()− − )
2, the
dishonest sender’s interim payoff function.
3 Message-monotone Equilibria in Γ
Since the strategies of the honest sender and the naive receiver are given exogenously, I
only need to find the the strategic players’ equilibrium strategies. Let () : Ω→ be
the dishonest sender’s (pure) reporting strategy and () : → be the sophisticated
13Of course, there are many other payoff functions with which reporting truthfully and following the
messages as if truthful are dominant strategies. Different functional form will not change the results.
14For convenience, I will also refer to the sender of type ( ) as the type- dishonest sender.
8
receiver’s (pure) action strategy. (The strict concavity of the sophisticated receiver’s
utility function implies that she never plays a mixed strategy in equilibrium.) My analysis
will focus on the following class of equilibrium, the message-monotone equilibrium.
Definition 1 A message-monotone equilibrium in Γ is a sequential equilibrium15 in
which () is weakly increasing in
So a message-monotone equilibrium requires that the dishonest sender’s strategy be
pure and increasing in the state. Since both the sender and the receiver have monotone
preferences — they prefer higher actions for higher states, and that a fraction of receivers
follow the messages literally, this is a natural restriction to make. Note that no restriction
is made on the receiver’s strategy.
Let me first use an example to illustrate certain properties of a message-monotone
equilibrium and then provide a general characterization and interpretation.
3.1 An Example
Suppose is uniformly distributed on [0 1], = = 001 and = 005. Let = 10 and
= [01(− 1) 01) for = 1 9 and = [09 1]. So = 005 015 095.
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0.05
0.15
0.25
0.35
0.45
0.55
0.65
0.75
0.85
0.95
m
the dishonest sender's strategy
Figure 1: ()
15Kreps and Wilson (1982) define sequential equilibrium only for finite games. Manelli (1996) adapts
their definition to infinite signaling games and I use Manelli’s definition in this paper. The definition
requires that the sender selects a best reponse for any type realization and that the receiver selects a
best response to any message on and off the equilibrium path. Since the existence of the honest sender
implies that every message is sent on the equilibrium path when 0, this definition of sequential
equilibrium coincides with Bayesian Nash Equilibrium with the requirement of interim optimality.
9
0.05 0.15 0.25 0.35 0.45 0.55 0.65 0.75 0.85 0.950
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
m
a
the sophisticated receiver's strategy
Figure 2: ()
Figure 1 shows that (0) = 1 = 005 and all the messages in are sent by a
positive measure of the dishonest sender. The dishonest sender exaggerates his claims:
he always sends a message higher than his observtion of (except when 095 since
the highest message = 095). Figure 2 shows that the sophisticated receiver discounts
the sender’s claims — she responds to a message with an action lower than the message.
Moreover, as figure 1 shows, equilibrium messages cluster at the top. In particular,
a threshold (= 065) exists such that a “large” measure of the dishonest sender pools
at each of the three messages higher than whereas lower messages are sent by only
“small” measures of the dishonest sender. Figure 2 shows that the sophisticated receiver’s
responses to the top three messages are far apart, reflecting the pooling of large measures
of the dishonest sender, but the responses to the lower messages are relatively flat.
3.2 Characterization of Message-monotone Equilibria in Γ
Fix 0 and ∈ (0 1). A message-monotone equilibrium always exists, as will be
shown in section 5. In this section, I characterize the properties of players’ equilibrium
strategies.
Since Ω is a continuum and is finite and has potential messages, the sender’s
increasing strategy () is a step function that can be represented by a vector x ∈[0 1]+1. The components of x are ( = 0 ) where 0 = 0 = 1 and ≤+1. Each component of x is a “jump point” of (·), i.e., the value of at which
the sender jumps from one message to the next higher one. Formally, x is defined as
follows: for ∈ such that (| () = ) 6= 0, let −1 = inf : () = and = sup : () = ; for ∈ such that (| () = ) = 0, if
10
(0) (1), then = inf : () ≥ , if ≤ (0), = 0, if ≥ (1),
= 1.
Suppose () and () are strategies in a message-monotone equilibrium. Let
x ∈ [0 1]+1 represent (). The existence of the honest sender implies that everymessage is sent with positive probability ex ante and therefore the receiver’s posterior
can always be pinned down by Bayes’ rule.16 Given the sophisticated receiver’s quadratic
payoff function, her best response to is equal to the conditional expectation of .
If 1 ≤ (0), then the sophisticated receiver believes that was sent by
the honest sender with probability one and hence () = . If ≥ (0), then the
sophisticated receiver believes that was sent by the honest sender with probability
( ( ∈ )) ( ( ∈ ) + (1− ) (| () = )) and by the dishonest sender with
probability ((1− ) (| () = )) ( ( ∈ ) + (1− ) (| () = )). Let
() =³R
:()= ´ ( (| () = )). (So () is the sophisticated re-
ceiver’s best response if she believes that was sent by only the dishonest sender.) The
best response of the sophisticated receiver is a weighted average of and (), i.e.,
() = ( ∈ ) + (1− ) (| () = )
()
( ∈ ) + (1− ) (| () = )
=R∈ + (1− )
R:()=
( ∈ ) + (1− ) (| () = )
Clearly, if −1 = , then (| () = ) = 0 and () = . Suppose −1 6= .
So ( : () = ) 0. Suppose 1. Let = min : ( () = )
0, i.e., is the lowest message higher than sent by the dishonest sender with positive
probability. Then at the “jump point” , the dishonest sender must be indifferent
between sending and .17 Without loss of generality, assume that the dishonest
sender of type sends the higher message, with probability one.
Let = min ∈ : ( : () = ) 0. Without loss of generality, assumethat if the dishonest sender of type 0 is indifferent between sending and0 ( ), then
he sends with probability 1. Then, for any ∈ such that | () = 6= ∅,16This implies that the question of how players form their beliefs off the equilibrium path, which is a
prominent issue in signaling and cheap-talk games (e.g., Banks and Sobel (1987), Cho and Kreps (1987),
Farrell (1993), Matthews, Okuno-Fujiwara and Postlewaite (1991)), does not arise here.17Suppose the indifference of type does not hold. Then WLOG, assume there exists an
0 such that ( () ) − ( () ) . Because ( () ) − ( () ) is continuous in , there exists a ∈ (0 − −1) such that for all ∈ (− ),
| ¡ ( () )− ( () )¢− ¡ ( () )− ( () )
¢ | . So
for the dishonest sender of type ∈ ( − ), sending is strictly better than sending , a
contradiction.
11
it must be that ( : () = ) 0. That is, if a message is sent by some type of
the dishonest sender, it must be sent by a positive measure of the dishonest sender.
Let = min ∈ : ≥ if and = if ≥ . Let
= max=1 ( ∈ ), i.e., is the maximum measure of the honest sender who
sends a particular message. So a smaller implies a finer message space. The following
proposition characterizes the properties of a message-monotone equilibrium when the
message space is sufficiently fine.
Proposition 1 (Properties of equilibrium strategies) Fix 0 and ∈ (0 1). Suppose () and () are strategies in a message-monotone equilibrium in Γ. There exists a
0 such that if ,18 the strategies satisfy the following properties:
1. (The sender’s on-path messages)
If (0) ≤ ≤ , then −1 6= ; also, (0) ≤ .
2. (Sender exaggeration and receiver skepticism)
If (0) ≤ ≤ , then () () and () + . Furthermore,
if (0) ≤ , then .
3. (Clustering of messages at the top)
There exists an ∈ such that for ∈
(a) if , then () ≥ −1 + and () ;
(b) if (0) ≤ ≤ , then () −1 + .
4. (Non-monotonicity of the receiver’s actions)
Suppose +1 ≤ . If (+1 − − )2 ( − − )
2, then (+1) ();
If (+1 − − )2 ( − − )
2, then (+1) ().
The proof is in the appendix.
Part (1) says that any message higher than (0) is sent by a positive measure of the
dishonest sender. It also says that the message sent by the lowest type of the dishonest
sender is no higher than , which implies that any message higher than is sent by the
dishonest sender in equilibrium.
Part (2) says that the dishonest sender inflates his claims: the dishonest sender who
sends has on average a lower observation than the honest sender who sends. In fact,
a stronger result holds: except for those types who necessarily find every message lower
18 may depend on and
12
than his observation (i.e., when ), the dishonest sender’s equilibrium message is
always higher than his type. As a result, the sophisticated receiver is skeptical of the
“face value” of any message above (0). Her best response to any above (0) is
lower than , but higher than if she knew that the sender was dishonest. Part (2) also
points out that the sophisticated receiver’s best response to never exceeds the ideal
point of the highest type of the dishonest sender who sends .
Part (3) says that one can view the dishonest sender’s strategy as having two parts.
One part is at the top of the message space ( ). For these messages, the dishonest
sender’s reporting strategy is reminicent of what happens in the CS model: roughly, a
“large” measure (in the sense that () ≥ −1+ ) of the dishonest sender pools on a
particular message and the sophisticated receiver’s best response to is higher than the
ideal point of the lowest type of the dishonest sender who sends .19 As to the messages
below , only a “small” measure pools on a particular message. This “clustering” result
implies that one should expect the messages at the higher end of the message space being
used more frequently than those at the lower end.
Perhaps the most surprising result is part (4), which points out that the sophisticated
receiver’s best response may be non-monotone in the message she receives.20 Note that
when the dishonest sender of type makes a strategic choice of what message to send,
he weighs the strategic response of the sophisticated receiver against the fixed response of
the naive receiver. If (+1 − − )2 ( − − )
2, then sending the higher message
+1 induces a worse response from the naive receiver. To make the dishonest sender
indifferent, the higher message+1 must induce a better response from the sophisticated
receiver. Hence (+1) (). If (+1 − − )2 ( − − )
2, however, sending
the higher message +1 induces a better response from the naive receiver. To make the
dishonest sender indifferent, the higher message +1 must induce a worse response from
the sophisticated receiver. Hence (+1) ().21
It may seem somewhat counter-intuitive that the sophisticated receiver’s best response
might be decreasing when both the senders use increasing reporting strategies. But recall
that the sophisticated receiver’s best response to is a weighted average of and
(). While it is true that conditional on the sender being honest or dishonest, a
19Consider a CS equilibrium with more than one steps. Except in the first step, the receiver’s action
in any step must be higher than the ideal point of the lowest type of the sender in that step. In the first
step, however, depending on the parameters, the receiver’s action may or may not be higher than the
ideal point of the lowest type.20Although it is possible that () is non-monotone, it is not necessarily so. For instance, () is
weakly increasing in the example in section 3.1. But one can easily find examples in which () is
non-monotone.21Of course, a worse response could also be a response that exceeds the sender’s ideal point and
therefore is too high, but this is ruled out in equilibrium for .
13
higher implies a higher , a higher message may also imply a higher probability that
the sender is dishonest — the message is “too good to be true.” Overall, the expectation of
may be lower when is higher and therefore the sophisticated receiver can rationally
respond to more aggressive claims with more moderate actions.
The equilibrium characterization in Proposition 1 is consistent with empirical find-
ings. For example, Malmendier and Shanthikumar (2007) study stock analysts’ rec-
ommendations to investors both large and small. They find that while large investors
account for the incentives of the analysts to bias their recommendations upward, small
investors seem to take the recommendations literally. Their analysis of the NYSE Trades
and Quotations database shows that although all categories of recommendation (“strong
sell”, “sell”, “hold”, “buy”, “strong buy”) are issued by analysts, there are very few
“strong sell” or “sell” recommendations (458%). So over 95% of all recommendations
are clustered at the three positive or neutral categories. Moreover, the recommendations
influence the trading behavior of both large and small investors (this indicate that there
is substantial common interest between analysts and investors). But large investors are
skeptical of the face value of the recommendations: in general, they shift down their
reaction to positive recommendations (e.g., they hold in response to buy recommenda-
tions); they react to negative or neutral recommendations by selling, but the differential
reaction to hold, sell and strong sell is insignificant.
4 Limit Equilibrium Outcomes
In this section, I explore the properties of the limit equilibrium outcomes as the pertur-
bation of the non-strategic players vanishes. Section 4.1 analyzes the limit equilibrium
outcomes when both and approach 0. Section 4.2 provides a discussion of the prop-
erties of limit equilibrium outcomes as either or approaches 0. Section 4.3 addresses
the question of whether one-sided perturbation is sufficient to select the most informative
equilibrium in the limit game.
4.1 Limit Equilibrium Outcomes as Both and Go to 0
Fix everything in game Γ except for the parameters and . Consider a sequence of
gamesnΓ
owith → 0 and → 0. The limit game Γ00 is a Crawford-Sobel game.
22
(I also use Γ to refer to Γ00)
22I use Manelli’s (1996) Definition 4 for the convergence of games. In my model, the requirement is
that
( )→ 00 ( ) and
( )→
00 ( ).
14
Denote by ( () ()) a message-monotone equilibrium strategy profile in the
game Γ . Suppose the message space is sufficiently fine that the properties in
Proposition 1 hold along the sequence.23 Together with the prior on the players’ types,
( () ()) generates an equilibrium outcome, a distribution on × ×.
Let denote a message-monotone equilibrium outcome of the game Γ
. As → 0
and → 0, converges (in a subsequence) to a limit distribution . The following
upper hemi-continuity result holds.
Lemma 1 The limit distribution is a sequential equilibrium outcome of the limit game
Γ00.
This result follows from Theorem 1 and Corollary 1 in Manelli (1996). His Theorem 1
implies that if can be generated by a strategy () of the sender, a strategy () of the
receiver, and () is continuous, then the result holds. To show that these conditions
are met, one can use the same argument as Manelli’s for his Corollary 1. First, it is
always possible to find a strategy (·) to generate × 24 the sender’s share of the
limit distribution. Second, because is finite, the receiver’s strategy is continuous
and () converges uniformly to a strategy (). The uniform convergence implies
that the limit distribution is generated by (). Hence is a sequential equilibrium
outcome of the limit game Γ00.
Although the limit distribution is an equilibrium outcome of the limit game, the
converse is not true, i.e., not all equilibrium outcomes in Γ are limit distributions of
equilibrium outcomes in a sequence of converging games. As is well known, the Crawford-
Sobel game Γ has multiple equilibria. In particular, fix any sender bias , there is an
upper bound, , on the size of an equilibrium (i.e., the number of subintervals in an
equilibrium partition). Equilibria of each size from 1 through exist. Crawford and
Sobel (1982) show that under a regularity condition, the condition () 25 the equilibrium
that has the highest number of steps ex ante Pareto dominates the other equilibria with
fewer steps and is therefore called the “most informative” equilibrium. In Proposition 2
below, we will see that (i) as the probabilities of the non-strategic types vanish, the limit
equilibrium outcome corresponds to a most informative equilibrium outcome in the limit
CS game, and (ii) in this limit equilibrium outcome, only the top messages are used.
23As shown in the proof of Proposition 1, is bounded away from 0 as and approach zero. So
such exists.24Note that although () may not converge, the vector that represents () does converge
(in a subsequence). Suppose converges to a vector . Then represents the strategy (·) thatgenerates × .25See Crawford and Sobel (1982) page 1444 for the definition of (). For example, the uniform-
quadratic case satisfies ().
15
Say an equilibrium outcome in Γ is a limit distribution of the equilibrium outcomes
of the perturbed games if there exist sequences and with → 0 and → 0
such that the corresponding sequence of message-monotone equilibrium outcomes converges to . Recall that = max=1 ( ∈ ), the maximum measure of the
honest sender who sends a particular message.
Proposition 2 (Limit equilibrium outcome as and → 0) Suppose is a limit dis-
tribution of the equilibrium outcomes of the perturbed games. There exists a 0 such
that if , then (i) is a most informative equilibrium outcome in Γ and (ii) only
the top messages in are sent with positive probability in .
The proof is in the appendix. The following is a sketch.
Sketch of proof. First, I show that is a most informative equilibrium outcome,
using a result (their Proposition 3) in Chen, Kartik and Sobel (2008). They show that
in a CS game, under (), only the most informative equilibrium outcome satisfies the
following “no incentive to separate” (NITS) condition: the type-0 sender’s equilibrium
payoff is at least as high as the payoff he gets if the receiver knew his type and re-
sponded optimally. In my model, NITS requires that type 0’s equilibrium payoff is at
least − (0− 0− )2. Proposition 1 implies that if the message space is sufficiently fine,
NITS is satisfied along the sequence . This is easily seen because 1 ∈ (0 ) and0 (1) ≤ 1 and hence the type-0 sender can guarantee a payoff of at least −2by sending 1. Because (·) converges uniformly, NITS must be satisfied in the limitequilibrium outcome . Hence is a most informative equilibrium outcome in the limit
CS game.
Then, I show that if a message is sent with positive probability in , then any message
above it must be sent with positive probability as well. Hence, only the top messages
are sent with positive probability in the limit equilibrium outcome.
4.2 Limit Equilibrium Outcomes as Either or Goes to 0
It is also useful to understand what happens if the probability of only one kind of the non-
strategic players vanishes. Below I discuss properties of the limit equilibrium outcomes
as either or goes to 0.
(i) Fix everything in game Γ (including 0) except . Consider a sequence of
games©Γ
ªwith → 0. That is, in the limit, the receiver is sophisticated with
probability 1, but the probability that the sender is honest is positive. In the limit
equilibrium, the sender’s strategy preserves the properties in Proposition 1. In particular,
any message between (0) and is stil sent with positive probability and the messages
16
cluster at the top of the message space (i.e., above the threshold ). However, in the
absence of the naive receiver, the sender’s indifference condition implies that the limit
equilibrium strategy () is constant below . So the receiver’s limit action strategy
differs from the characterization in Proposition 1.
(ii) Fix everything in game Γ (including 0) except . Consider a sequence
of gamesnΓ
owith → 0. That is, in the limit, the sender is dishonest with
probability 1, but the probability that the receiver is naive is positive. In the ab-
sence of the honest sender, the messages below are no longer sent by the dishon-
est sender and only those messages above are sent with positive probability in the
limit equilibrium. Because the probability that the receiver is naive is positive even
in the limit, the sophisticated receiver’s limit equilibrium strategy is not constant be-
low . In particular, for , the limit equilibrium strategy () satisfies
− ( − 0− )2−(1− ) ( ()− 0− )
2= − (− 0− )
2−(1− ) ( ()− 0− )2.
That is, the receiver’s response makes the type-0 sender indifferent among all the mes-
sages between and .
4.3 One-sided Perturbation
Proposition 2 shows that only the most informative equilibrium outcome in the CS game
is robust to the two-sided perturbation of non-strategic players, providing a rationale for
selecting the most informative equilibrium in the CS game. Can we still select the most
informative equilibrium outcome if the perturbation is only on one side, i.e., either the
receiver is possibly naive or the sender is possibly honest, but not both? Below, I address
this question by analyzing the limit equilibrium outcome when perturbation is one-sided.
4.3.1 Perturbation only on the receiver’s side ( = 0 and 0)
Let Γ0 denote a game in which the probability that the sender is honest is 0 and the
probability that the receiver is naive is . Fix the parameters in Γ0.
Proposition 3 Any equilibrium outcome in the limit CS game Γ00 is also an equilibrium
outcome of the perturbed game Γ0.
This result follows from Proposition 2 in Ottaviani and Squintani (2006). I refer the
reader to their paper for the proof, but discuss the main idea here. Let (0 1 )
denote the equilibrium partition of size in the CS game Γ00. For the game Γ0 with
0, one can construct the following equilibrium: for = 0 − 1 and the interval(+1), the sender sends the message ∈ that is closest to the strategic receiver’s
17
best response (+1) = (| ∈ (+1)).26 Call this message . For an off-the-
equilibrium-path message ∈ (+1) that is lower (or higher) than , the belief
assigned to the sophisticated receiver leads to an action that is higher (or lower) than
(+1). Because of the strict concavity of the sender’s payoff in the receiver’s action,
this construction prevents the sender from deviating. It also generates the equilibrium
partition (0 1 ).
So all equilibrium outcomes in the limit CS game are robust to the perturbation
only on the receiver’s side. Note that the construction of the equilibrium described in
the preceding paragraph involves implausible beliefs off the equilibrium path that would
have been ruled out if the sender were possibly honest.
4.3.2 Perturbation only on the sender’s side ( = 0 and 0)
In this case, one can show that only the most informative equilibrium outcome in the
limit CS game is robust to the perturbation only on the sender’s side. That is, we have
the following result.
Proposition 4 There exists a 0 such that if , then only the most informative
equilibrium outcome in the limit CS game Γ00 is a limit distribution of message-monotone
equilibrium outcomes in a converging sequence of perturbed games Γ0 with → 0.
The proof is in the appendix. The crucial observation is that in any message-monotone
equilibrium in a perturbed game Γ0 with 0, the “no incentive to separate” condition
is satisfied. This is due to the discipline imposed on the receiver’s beliefs by the existence
of the honest sender.
Although perturbation on the sender’s side is sufficient to select the most informative
equilibrium outcome in the CS game, it is worth noting that certain equilibrium properties
away from the limit as characterized in Proposition 1 depend on being positive. In
particular, the interesting non-monotonicity of the receiver’s actions arises only when
0.
5 Existence
One important difference between mymodel and related models is that mymodel assumes
a finite message space whereas others typically assume a continuous message space. In
addition to being a more realistic assumption in many communication situations, a finite
26In Ottaviani and Squintani (2006), the message space is continuous and hence the sender sends a
message that is equal to the sophisticated receiver’s optimal response, but the argument is the same.
18
message space has another advantage: it guarantees the existence of a message-monotone
equilibrium, as Proposition 5 below shows, whereas a message-monotone equilibrium may
fail to exist when the message space is continuous.
Proposition 5 (Equilibrium existence with a finite message space) Fix 0, and
∈ (0 1), ∈ (0 1). A message-monotone equilibrium exists in Γ.
The proof is in the appendix. The following is a sketch.
Sketch of Proof. The proof has two main steps. First, the sophisticated receiver
is restricted to choose her strategy () from the set in which + (1− ) () is
weakly increasing.27 Under this restriction, the dishonest sender’s best response is weakly
increasing because of the single crossing property. Application of Kakutani’s fixed point
theorem shows that the “restricted” best response correspondence has a fixed point.
Second, I show that for a fixed point found in the first step, the sophisticated receiver’s
strategy that corresponds to the fixed point is still a best response even without the
constraint that + (1− ) () is weakly increasing, thus establishing the existence
of message-monotone equilibrium.
When the message space is continuous, however, a message-monotone equilibrium
may fail to exist. The example below illustrates this.
Example: Suppose () = 1, ∀ ∈ [0 1], = 005 and = 1 2 −1
.
For simplicity, let’s look at the limit message-monotone equilibrium outcome as
and go to 0. The most informative CS equilibrium partition consists of 3 subintervals:
[0 215) [ 2
15 715) [ 7
15 1]. Proposition 2 in section 4 implies that in , only the top three
messages are sent with positive probability. In particular, in , ∈ [0 215) =
−3 = 1
15 = 2
15, ∈ [ 2
15 715) = −2
= 3
10 = 1
3and ∈ [ 7
15 1] =
−1 = 11
15 = 8
15.
As converges to [0 1],28 the equilibrium outcomes will converge weakly to the
distribution where ∈ [0 215) = 1 = 1
15 = 2
15, ∈ [ 2
15 715) = 1 = 3
10 =
13and ∈ [ 7
15 1) = 1 = 11
15 = 8
15. But no strategy pair in the limit game Γ
(same as Γ except the message space in Γ is [0 1]) can generate this distribution. To
generate this distribution, the sender must send = 1 for all ∈ [0 1] and the receivermust respond to = 1 with = 1
15when ∈ [0 2
15), with = 3
10when ∈ [ 2
15 715) and
with = 1115when ∈ [ 7
15 1], which is obviously impossible.
In any finite game Γ, the top three messages in the message space are used to convey
different information. But the infinite game Γ has no top three messages and equilibrium
27The restriction is that (+ (1− ) ()) is increasing, not that () is increasing. Indeed, as
will be shown in section 3.2, () is not necessarily increasing in a message-monotone equilibrium.28Convergence is in the Hausdorff metric.
19
breaks down in the limit because the sender is unable to convey his private information
effectively to the receiver.
The nonexistence problem in infinite signaling games has been investigated in the
literature. Manelli (1996) provides an ingenious way to solve the problem. The idea
is to extend the sender’s strategy space and allow him to make a costless, nonbinding
suggestion of action to the receiver, in addition to sending a message . Manelli (1996)
terms this new game the canonical cheap-talk extension of the original game. His Theo-
rem 2 shows that the limit distribution of sequential equilibrium outcomes in a sequence
of converging games is a sequential equilibrium outcome of the canonical cheap-talk ex-
tension of the limit game. Since my Proposition 5 has shown that a message-monotone
equilibrium exists whenever the message space is finite, one can apply Manelli’s Theorem
2 to obtain the following result.
Result: A message-monotone equilibrium exists in the canonical cheap-talk extension
of Γ, the game with a continuous message space [0 1].
In particular, going back to the example above, one can see that the canonical cheap-
talk extension of Γ has a message-monotone equilibrium in which the dishonest sender
sends = 1 and suggests = 115when ∈ [0 2
15), = 3
10when ∈ [ 2
15 715) and = 11
15
when ∈ [ 715 1], and the sophisticated receiver follows the suggestions.
6 Conclusion
In this paper, I enrich the standard communication game between parties of partial
common interest by incorporating non-strategic players — the honest sender and the
naive receiver — into the model.
Introducing the possibility that the players may be non-strategic fundamentally changes
the way the game is played. The communication is no longer “cheap talk” because the
naive receiver responds to messages in a literal-minded way. The existence of the hon-
est sender implies that every message is sent with positive probability ex ante and it
disciplines the receiver’s posterior belief.
One contribution of the paper is that it provides an explanation for several important
and interesting aspects of strategic communication that the canonical model fails to ex-
plain. For example, empirical studies on stock recommendations have found that analysts
typically exaggerate their claims upwards, many investors discount the analysts’ “buy”
recommendations, and positive and neutral categories are issued much more frequently
than negative ones. While these aspects are absent in the predictions of the canonical
model, my paper shows that they arise naturally in the perturbed model. A surprising
20
result also arises in the perturbed model: the strategic receiver’s equilibrium strategy
may be non-monotone, that is, she may respond to more aggressive claims with more
moderate actions.
Another contribution of the paper is that it generates sharper predictions than the
canonical model does even in the limit. In particular, as the probabilities of the non-
strategic players go to zero, the equilibrium outcomes in the perturbed games converge
to a most informative equilibrium outcome in the limit game where the sender partially
separates by using the messages at the top of the message space.
Appendix
Proof of Proposition 1:
Suppose −1 6= , 1 and = min : ( () = ) 0. Let4 ( ) = ( () )− ( () ) = − ( −)
( + − 2 − 2)−(1− ) ( ()− ()) ( () + ()− 2 − 2). Monotonic-ity of () requires that 4 ( ) 0. Hence
4( )
= 2 ( −) +
2 (1− ) ( ()− ()) 0. Type ’s indifference requires that 4 ( ) = 0.
Combining the two conditions, we have ( ()+ ()−−) ( () + ()− 2 − 2) ≥0. Call this condition (∗). Condition (∗) implies that if ()+ ()−2−2 (≥) ≤ 0,then () + () (≥) ≤ +.
Recall that = min ∈ : ≥ if and = if ≥ .
Lemma 2 For all ≥ , −1 6= .
Proof. By contradiction.
Suppose and there exists an ≥ such that −1 = . Then () =
≥ ≥ . So the dishonest sender of type 0 = (−) gets his highest possible
payoff by sending = and therefore he has a strict incentive to deviate to sending
, a contradiction.
Suppose = and −1 = . Then () = ≤ . Since (0) and
( (0)) , the dishonest sender of type 0 has a strict incentive to deviate to , a
contradiction.
Next, I show that in a message-monotone equilibrium, the sophisticated receiver’s
best response to cannot be higher than the ideal point of the dishonest sender of type
— the supremum of the types of the dishonest sender who send .
21
Lemma 3 Suppose −1 6= . Then () +
Proof. Suppose = 1. Then obviously () + .
Suppose 1. Then and there must exist an such that is
indifferent between sending and . I show by contradiction that () + .
Suppose () ≥ + . Since () is a weighted average of and (), it follows
that () ≤ max (). Since () , it follows that () ≤ max .
So () ≥ + implies that () ≥ + .
The indifference condition of type implies that ( −) ( + − 2 − 2)+ (1− ) ( ()− ()) ( () + ()− 2 − 2) = 0. Since ≥ + and
() ≥ + , it must be true that () () and () + () 2 + 2.
But since (), it follows that (), contradicting condition (∗). Hence, () + .
The next few lemmas are derived under the condition that the message space is
sufficiently fine. Recall that = max (). Note that if is small, then the distancebetween adjacent messages is also small. In particular, let = min () : ∈ [0 1]) 0. Then () =
R∈ () ≥ (sup ()− inf ()). Since +1 − sup (+1) −
inf () = (sup (+1)− inf (+1)) + (sup ()− inf ()), we have +1 − (+1)
+
()
≤ 2
. So if ≤
2, then (+1 −) ≤ for all = 1 − 1. Also, if ≤ ,
then 1 and 1− .
It is possible that the sophisticated receiver’s best response to is higher than the
ideal point of the type −1 (the infimum of the types of the dishonest sender who send
). Lemmas 4 and 5 show that it can happen only at the high end of the message space.
Lemma 4 Suppsose −1 6= and () ≥ −1 + . Then there exists a 1 0 such
that if 1, then ().
Proof. First, note that () ≥ −1 + implies that () ≥ −1 + . So
− −1 .
If ≤ (), then obviously ().
Suppose ().
Note that () − () = (∈)+(1−) (|()=)
()
(∈)+(1−) (|()=)− () =
(∈) (∈)+(1−) (|()=)
¡ − ()
¢.
Since (| () = ) =R −1
() (−−1) and − () 1−,it follows that ()− ()
(∈) (∈)+(1−) (1− ).
Note that − () =R −1
( − )()
()− (−1)≥ R −1
( − ) = 12( − −1)
2
122 0.
22
Hence, − () =¡ − ()
¢−¡ ()− ()¢ 1
22−
³ (∈)
(∈)+(1−) (1− )´.
Let 1 satisfy the condition122−
³1
1+(1−) (1− )´= 0. (If 1
22−
³
+(1−) (1− )´
0, ∀, then let 1 = ∞. Note that as → 0, 1 → ∞) Since 122 0 and
122 −
³
+(1−) (1− )´is decreasing in , it follows that 1 0.
If ( ∈ ) 1, then122 −
³ (∈)
(∈)+(1−) (1− )´ 0. So if 1,
29 then
− () 0.
Lemma 5 Suppose −1 6= and () ≥ −1 + . Let = min
: ( () = ) 0. Then there exists a 2 0 such that if 2 , then
() −1 + and ().
Proof. Note that = −1.
Suppose 1. Then Lemma 4 implies that ().
The indifference condition of the dishonest sender of type implies ( − − )2+
(1− ) ( ()− − )2= ( − − )
2+ (1− ) ( ()− − )
2. Consider the
following cases.
(i) Suppose ( − − )2 ≥ ( − − )
2. Then the indifference of type requires
that ( ()− − )2 ≥ ( ()− − )
2 2.
(ii) Suppose ( − − )2 ( − − )
2. Then there are two possible cases:
either (a) ≥ and by Lemma 2 = +1, or (b) , = and
− 2 ( − ) 2 ( −−1).
The indifference condition is
( ()− − )2= ( ()− − )
2+
1−
¡( − − )
2 − ( − − )2¢
= (+ − ())2+
1− (( −) (2 + 2− −))
As shown in the proof of Lemma 4, − () 122 −
³ ()
()+(1−) (1− )´.
Under case (iia), − = +1− (+1)+ ()
and under case (iib), −
2 ( −−1) 2( ()+ (+1))
. Also, since ( − − )
2 ( − − )
2, it follows
29The bound 1 is not tight. The other bounds found in the following lemmas are not tight either, but
since the goal is to establish properties of the equilibrium strategies if the message space is suffciently
fine, the bounds suffice.
23
that 0 2 + 2− − 2− 2. So under case (iia),
( ()− − )2
µ+
1
22 −
µ ()
() + (1− ) (1− )
¶¶2+
1−
µ (+1) + ()
¶(2− 2) .
Under case (iib),
( ()− − )2
µ+
1
22 −
µ ()
() + (1− ) (1− )
¶¶2+
1−
µ2 ( () + (−1))
¶(2− 2) .
Let 01 satisfy³+ 1
22 −
³01
01+(1−) (1− )´´2
+ 1−
³2×201
´(2− 2) = 2. (Note
that as → 0, 01 →∞.) Since¡+ 1
22¢2
2 and³+ 1
22 −
³
+(1−) (1− )´´2
+
1−
³2
´(2− 2) is decreasing in , it follows that 01 0. If 01, then ( ()− − )
2
2.
So in all three cases (i), (iia), (iib), ( ()− − )2 2.
To satisfy ( ()− − )2 2 either () or () + 2.
Suppose () . Since () is a weighted average of and () where
() , it follows that () . Since + and
() + , the indifference of type implies that () (). So we have
() () + and () , () , contradicting condition (∗).Hence () + 2 = −1 + 2. Below, I show that () −1 + for
sufficiently small. Consider the following two cases:
(1) Suppose ≤ (). Then () ≥ () −1 + 2.
(2) Suppose (). Then () () By Lemma 3, + ().
Since () −1 + 2, we have + and (| () = ) ≥ .
Since () = (∈)+(1−) (|()=)
()
(∈)+(1−) (|()=)≤ (∈)+(1−)()
(∈)+(1−) and
1 we have −1 + 2 (∈)+(1−)()
(∈)+(1−) .
Let 001 (−1) satisfy001+(1−)(−1+)
001+(1−) = −1 + 2. (Note that as → 0, 001 (−1)→∞.) So 001 (−1) 0 and if ( ∈ ) 001 (−1), then
() −1+. As a function
of −1, 001 (−1) is increasing. Note that −1 −1 + ≥ . So 001 (−1) ≥ 001 ().
Let 2 = min1 01 001 () 0. If 2, then in both cases (1) and (2), ()
−1 + and by Lemma 4, ()
24
Let = max ∈ : −1 6= , () −1 + . Then, if and
−1 6= , then () ≥ −1 + . Of course, if ≥ 1, then = = . Suppose
1. Note that the number of messages that can possibly have () ≥ −1 +
is finite. Let () be the number of messages that can have () ≥ −1 + . If is
sufficiently small, then there will be more than () messages above . In particular,
recall that (+1 −) 2∀. Since 1 −
and +
, if
1− −+
2
() (i.e., if (1−)
2(()+1)), then there are at least () messages above . Let
= min2 (1−)2(()+1)
2 0. We have
Lemma 6 If , then ≥ and for all , −1 6= , () ≥ −1 +
(), () (); moreover, if , then .
Proof. Lemma 2 says that if ≥ , then −1 6= . So if , then ≥ .
Lemmas 4 and 5 immediately imply the following: if , then for all ,
() ≥ −1 + and ().
To show that (), note that if then sup[] inf[] + . Also,
() ≥ −1 + implies that ≥ −1 + . Since sup[] = , it follows that if
, then inf[] −1 and hence (). Also, if
2, then−1 −.
So −1 ()− implies that −1 −1. Induction shows that () ≥ −1+
implies (). Since () is a weighted average of and (), it follows
that () (). Induction also shows that if , then
The proof of Lemma 5 shows that a similar result can be established when − ()
0. This result will be useful in proving Proposition 2, so I state it here.
Corollary 1 Suppose −1 6= and − () 0. Let = min
: ( () = ) 0. There exists a 2() 0 such that if 2 () , then
() −1 + and ().
Proof. Define 1 in the same way as 01 was defined in the proof of Lemma 5:
1 satisfies (+ )2+
1−
³2×21
´(2− 2) = 2. (Note that 1 is a function of and
1 () 0, ∀ 0. Fix c, as → 0, 1 → ∞) Let 2() = min1 1 () 001 () 0.
The proof used in Lemma 5 goes through.
Lemmas 5 and 6 establish properties for messages above . For messages below ,
the following result holds.
Lemma 7 Suppose , ≤ with , −1 6= , −1 6= and = −1.
Then () () .
25
Proof. Since ≤ , () −1 + .
If () ≥ −1 + , then () (). Since () is a weighted average of
() and , it follows that () () .
Now suppose () −1 + .
Lemma 3 implies that () + .
Since the dishonest sender of type is indifferent between sending and , con-
dition (∗) implies that () + () ≤ +.
Suppose () ≥ . Since () + , we have + . Type-
’s indifference condition implies that () (). But this implies that () +
() + , a contradiction. Hence () . Since () is a weighted
average of and (), it follows that () () .
Next, I establish a number of results on (0) and ( (0)).
Claim 1 (0) ≤ .
Proof. If = , then clearly (0) ≤ .
Suppose and suppose (0) . Then ¡¢= ≥ and the dishonest
sender with type 0 =¡ −
¢gets his highest possible payoff by sending = . So
the dihonest sender of type 0 has a strict incentive to deviate, a contradiction.
Claim 2 If , then ( (0)) ( (0)) (0).
Proof. Suppose (0) = ≥ . Lemma 6 implies that ( (0)) 0 + . So it
follows that ( (0)) ( (0)) (0).
Suppose (0) . Let 0 = sup (| () = (0)) and (0) = (0). So
0 = −1. Lemma 2 implies that ≤ . Lemma 7 implies that () ()
. Note that the dishonest sender of type −1 is indifferent between sending (0) and
. Consider the following three cases.
(i) Suppose ( − −1 − )2 ≤ ( (0)− −1 − )
2and () ≤ −1 + . Since
( (0)) −1+ by Lemma 3, type −1’s indifference implies that ( (0)) ≥ ().
Note that ( (0)) () and () () by Lemma 7. Since ( (0)) is a
weighted average of ( (0)) and (0), it follows that (0) ( (0)) ( (0)).
(ii) Suppose ( − −1 − )2 ≤ ( (0)− −1 − )
2and () −1 + . Since
( (0)) −1+ , type −1’s indifference implies that ()− −1− ≥ −1+ − ( (0)). Since () ≤ , it follows that − −1 − −1 + − ( (0)),
which implies that ( (0)) 2−1 + 2 −. Since −−1 when
2 it
follows that 2 − 0. Hence ( (0)) 2−1 ( (0)) and therefore (0)
( (0)) ( (0)).
26
(iii) Suppose ( − −1 − )2 ( (0)− −1 − )
2. Since (0), (0)
and hence (0) − −1 − 0, it follows that −1 + . Since ≤ =
min ∈ : ≥ , it follows that = and −1 − . Since −1 + − (0) − −1 − , it follows that 2−1 + 2− (0). Since 2− 0 when
2 we have −1 (0). Since ( (0)) −1, we have ( (0)) (0). So
(0) ( (0)) ( (0)).
Claim 3 For all ≥ (0), −1 6= .
Proof. Lemma 2 says that if ≥ , then −1 6= .
Suppose (0) ≤ . Since a message sent by some type of the dishonest
sender in equilibrium is sent by a positive measure of the dishonest sender, if = (0),
then 6= −1. Now suppose there exists an ∈¡ (0)
¢such that = −1.
Then () = ∈ ( (0) ). Claim 2 says that ( (0)) (0). So, the dishonest
sender of type 0 has a strict incentive to deviate from sending (0) to sending , a
contradiction.
Lemma 8 Suppose . If (0) ≤ ≤ , then −1 6= , () () ;
moreover, if , then .
Proof. Lemma 7 and Claims 1, 2 and 3 imply that that for all (0) ≤ ≤ ,
it holds that −1 6= , () () . So I only need to show that if
.
Suppose not. Then ≥ . Since , we have+1 ≤ . Since+1 +
when
2, it follows that + +1 . Since +1 + , it follows
from Lemma 6 that +1 ≤ . So (+1) + and (+1) + . Since
+1 + and () + by Lemma 3, type ’s indifference between
sending and+1 implies that (+1) (). Since (+1) ≥ (),
it follows that (+1) (+1), contradicting Lemma 7. Hence if .
Lemma 9 Suppose and (0) ≤ +1 ≤ . If (+1 − − )2 ( − − )
2,
then (+1) (); if (+1 − − )2 ( − − )
2, then (+1) ().
Proof. Claim 3 implies that the dishonest sender of type is indifferent between
sending and +1.
Suppose (+1 − − )2
( − − )2. Then ( (+1)− − )
2
( ()− − )2. So | ( (+1)− − ) | | ()− − |. Since () + by
Lemma 3, it follows that (+1) ().
27
Suppose (+1 − − )2
( − − )2. Then ( (+1)− − )
2
( ()− − )2. So | ( (+1)− − ) | | ()− − |. Since () + , it
follows that either (i) (+1) () + or (ii) (+1) + () with
(+1) − − + − (). Suppose (+1) + (). Lemma
8 implies that +1 (+1) and () Hence +1 − − 0 and
|+1 − − | | + −|, but this contradicts (+1 − − )2 ( − − )
2.
Hence it must be the case that (+1) () + .
To summarize, Claim 1 and Claim 3 imply part (1) of Proposition 1. For part (2) of
Proposition 1, Lemma 3 and Claim 3 imply that () + for all (0) ≤ ≤ ;
Lemma 6 and Lemma 8 imply that if (0) ≤ ≤ , then () () ,
() + and if (0) ≤ , then . Part (3) of Proposition 1 comes
from Lemma 6. Part (4) of Proposition 1 comes from Lemma 9. So if , then
the message-monotone equilibrium strategies () and () satisfy the properties in
Proposition 1.
Proof of Proposition 2. First, I show that is a most informative equilibrium
outcome in Γ by using the selection criterion introduced in Chen, Kartik and Sobel
(2008), the “no incentive to separate” (NITS) condition. This condition is satisfied if the
type-0 sender’s equilibrium payoff is at least as high as the payoff he gets if the receiver
knew his type and responded optimally. In my model, this condition implies that the
type-0 sender’s equilibrium payoff is at least as high as − (0− 0− )2= −2. Proposition
3 in Chen, Kartik and Sobel shows that under the regularity condition (M), only the most
informative equilibrium outcome satisfies NITS. Below, I show that satisfies NITS.
Suppose the sequence of message-monotone equilibrium outcomes convergesto . Let ( () ()) be the equilibrium strategy profile that generates . Let
0 be such that if , then the properties in Proposition 1 holds for the game
Γ . Let 0 = inf. The proof of Proposition 1 shows that 0 is bounded away
from 0 as and approach 0. Hence 0 0. So if 0, Proposition 1 implies that
for any either (i) 1 (0) and (1) = 1 ∈ (0 ) or (ii) 1 = (0) and
(1) = ( (0)) ≤ 1 ∈ (0 ). So (1) ∈ (0 ). Since converges uniformlyto , it follows that (1) ∈ [0 ]. So the type-0 dishonest sender’s payoff in the limitequilibrium outcome is at least as high as the payoff he gets by sending1 and inducing
(1) ∈ [0 ]. So in the limit distribution , the type-0 dishonest sender’s payoff is at
least as high as −2, satisfying NITS. Hence is a most informative equilibrium outcomein Γ.
Let (0 1,..,) be the partition of Ω in the most informative equilibrium in the
CS game Γ. Suppose ( = 1 ) is the receiver’s best response to step , i.e.,
28
= argmaxR −1− (− )
2 (). Note that (1 − 1) is bounded away from 0. That
is, there exists a 0 such that 1 − 1 0.
Suppose () and () are the sequential equilibrium strategies that generate the
outcome and the vector represents (). Suppose is the lowest message that
is sent with positive probability in . Then −1 = 0 and = 1, () = 1 and
− () .
Recall that ( () ()) is the equilibrium strategy profile that generates and
converges to . Let represent (). Since converges to and converges
uniformly to , it follows that for any ∈ (0 ), there exists a such that if and
, then − () − 0.
Let = min0 2 (− ) 0. (2 is defined in the proof of Corollary 1.) By
Corollary 1 and Lemma 6, if , then for all and , implies that
−1 + . Hence in the limit distribution ≥ −1 + for all , i.e.,
every message above is sent with positive probability. Since only messages are sent
with positive probability in the limit distribution, it follows that = −+1 and only
the messages at the top of are sent with positive probability.
Proof of Proposition 4. Since only the most informative equilibrium outcome in
the CS game satisfies the NITS condition, it suffices to show that NITS is satisfied in
any message-monotone equilibrium in Γ0 for 0.
Suppose not. Then there exists a message-monotone equilibrium in Γ0 that violates
NITS. Let () and () be the strategies in this equilibrium. Since NITS is violated,
() 2 for any ∈ . This implies that for any , −1 6= . The
argument in Lemma 2 still applies when = 0, which implies that for any message
∈, −1 6= .
Let =
2. Then, if ≤ , then (+1 −) ≤ for all = 1 − 1 and 1 ,
1− .
Since (1) 2 and 1 , it follows that 1 (1) (1) 2 1 + .
Note that if −1 (−1), then to satisfy the indifference condition of type −1, either
() = (−1) or () −1 + 2. Let 1 = min ∈ : () (1).Then (1−1) = (1) and (1) 1−1 + 2 (1−1) + 2. Since (1−1)
is a weighted average of 1−1 and (1−1) and (1−1) (1−1), we have
1−1 (1−1). Hence 1 (1−1)+ (1)− . Since (1) is a weighted
average of 1 and (1), it follows that 1 (1) (1) 1−1 + 2.
Let ¡
¢denote the th jump in (). We can show, by using arguments similar
to those for 1 , that whenever () jumps, it jumps by more than 2. Moreover,
¡
¢
¡
¢ −1 + 2 and
¡
¢− .
29
Let be the last jump point. Since − and 1 − , it follows that
6= . So . Since −1 + 2, it follows that () ()
if ≤
2. But since () is a weighted average of
() and , it follows that
() (), a contradiction. Hence, if ≤ , any message-monotone equilibrium in
Γ0 must satisfy NITS.
Proof of Proposition 5. Step 1. The sophisticated receiver’s strategy is (·) : → . Say that a vector ∈ represents the strategy (·) if and only if = ()
for = 1 . For a given (·), define () = + (1− ) (). Let be the
collection of strategies (·) such that () ≥ () for ≥ . DefineP
= ∈[0 1] : ∃ (·) ∈ such that represents (·). Note thatP
is convex.
The dishonest sender’s strategy is (·) : Ω → . LetP
= ∈ [0 1]+1 0 =0 0 ≤ 1 ≤ ≤ −1 ≤ = 1 be the collection of that represents a nonde-creasing (·). Note thatP
is convex.
Fix (·) ∈ and . Let () = ( () ). Suppose , ∈, . Then 4 ( ) = ( ) − ( ) = − ( −)
( + − 2 − 2)−(1− ) ( ()− ()) ( () + ()− 2 − 2). So (4)
=
2 ( −)+2 (1− ) ( ()− ()) = 2 ( ()− ()). If (·) ∈ , then(4)
≥ 0 and (·) satisfies the single crossing condition in (;).
The type- dishonest sender’s optimization problem ismax∈ (). Let (| (·))be his best response correspondence. Since () satisfies the single crossing property
in (;) for (·) ∈ , (| (·)) is nondecreasing in the strong set order (Lemma1, Athey (2001)). This implies that there exists a selection (| (·)) ∈ (| (·))for each ∈ Ω such that (·| (·)) is increasing and (·| (·)) can be represented by an ∈P
.
Given (·), let ( (·) (·)) be the sophisticated receiver’s expected payoff if sheplays (·). That is ( (·) (·)) = P
=1 ( (·) ()) where ( (·) ()) =
− R∈ ( ()− )
2 ()−(1− )
R:()=− ( ()− )
2 (). Let (xy) =
( (·) (·)) where ∈ Prepresents (·) and ∈ P
represents (·). Define thesophisticated receiver’s “restricted” best response (·) to (·) as follows: (·| (·))maximizes ( (·) (·)) subject to (·) ∈ .
Note that ex ante optimality implies interim optimality because each ∈ is
sent with strictly positive probability. Note also that the strict concavity of ( (·) )in also implies that given (·), (·| (·)) is unique. (This can be shown by con-tradiction. Suppose there are two functions 1 (·) and 2 (·) ∈ and both are restricted
best responses. Define 3 () = 1 () + (1− ) 2 () ∀ ∈ ∈ (0 1). Then3 (·) ∈ and the strict concavity of ( (·) ) in implies that 3 (·) is a better
30
response, a contradiction.)
Now define the set of vectors that represent best response strategies. Γ (y) = ∈P: represents (·) and ∀ ∈ Ω, () ∈ (| (·)) where (·) is represented by
∈ P. Γ (x) = ∈P
: represents (·| (·)) where (·) is represented by ∈P.Let
P=³P
P
´.Pis a compact, convex subset of +1+. Below, I apply
Kakutani’s fixed point theorem to show that the “restricted” best response correspon-
dence (Γ (·) Γ (·)) has a fixed point.By Lemma 2 in Athey (2001), Γ is convex since
(·| (·)) is nondecreasing in thestrong set order when (·) ∈ . The argument in Lemma 3 in Athey (2001) also shows
that Γ has a closed graph.
Since (| (·)) is unique for = 1 , Γ is convex. That Γ is continuousin can be shown by contradiction. Suppose a sequence
¡xy
¢converges to (xy)
where ∈ Γ¡x¢and ∈ Γ (x). Then there exists an 0 0 ∈ P
0 6= and
(xy0) (xy)+2. Since (·) is continuous in the elements of and , there existsa such that for all ,
¯¡xy0
¢− (xy0)¯ and
¯¡xy
¢− (xy)¯ .
Hence, for , ¡xy0
¢ (xy0)− (xy)+
¡xy
¢, a contradiction.
So the best response correspondence (Γ (·) Γ (·)) is convex and has a closed graph andtherefore has a fixed point by Kakutani’s fixed point theorem.
Step 2: Let (x∗y∗) be a fixed point found in step 1 where ∗ represents ∗ (·) and∗ represents ∗ (·).Let (·) be the sophisticated receiver’s unrestricted best reply to ∗ (·). That is, (·)
maximizes (∗ (·) (·)) without the constraint that (·) ∈ . Note that (·) is uniquesince the receiver’s payoff function is strictly concave. Observe that if ∗ (1) 6= (1),
then ∗ (1) (1); if ∗ () 6= (), then ∗ () (). For any ∈ ,
if ∗ () (), then ∗ () = ∗ (+1) ; if ∗ () (), then ∗ () =
∗ (−1).
Below, I show by contradiction that ∗ () = (), ∀ ∈.
Suppose not. Then there exist adjacent messages,+1 ∈ such that∗ () =
∗ (+1). Since+1 , it follows that ∗ (+1) ∗ (). Let4∗ (+1 ) =
∗ (+1 )− ∗ ( ). Since ∗ () = ∗ (+1), we have (+1 −) + (1− )
(∗ (+1)− ∗ ()) = 0. So4∗ (+1 ) = − (+1 −) (+1 + − ∗ (+1)− ∗ ()).
Note that 4∗ () 0 if and only if − ∗ () () ∗ (+1) −+1. If 4∗ 0,
then ∀ ∈ Ω, the dishonest sender strictly prefers ( ∗ ()) to (+1
∗ (+1)).
Therefore (|∗ () = +1) = 0 and (+1) = +1. Likewise if 4∗ 0, then
(|∗ () = ) = 0 and () = .
31
Consider the following cases.
(1) Suppose ≥ ∗ (). Since +1 and ∗ () ∗ (+1), we have
− ∗ () ∗ (+1) −+1 and 4∗ 0. So (+1) = +1 ≥ ∗ ()
∗ (+1), which implies that ∗ (+1) = ∗ (+2). By repeating the argument, we
have (+2) = +2 ∗ (+2) and so on. So () = ∗ (), a contradiction.
(2) Suppose ∗ () and (|∗ () = ) = 0. Then () = .
(2a) Suppose ∗ () ≤ () = . Then ∗ ()− +1 − ∗ () +1 −∗ (+1), i.e., 4∗ 0. So (+1) = +1 ≥ ∗ () ∗ (+1), which implies
that ∗ (+1) = ∗ (+2). Since +1 ∗ (+1), we are back to case (1).
(2b) Suppose ∗ () () = . Then ∗ () = ∗ (−1) and −1implies that ∗ (−1) ∗ (). Then ∗ (−1)−−1 − ∗ (). So (−1) =
−1 ∗ (−1). By repeating the argument, we have (−2) = −2 ∗ (−2) and
so on. So (0) = 0 ∗ (0), a contradiction.
(3) Suppose ∗ () and (|∗ () = +1) = 0. Then (+1) = +1
(3a) Suppose ∗ (+1) ≥ (+1) = +1. Then ∗ () − +1 − ∗ (+1).
So 4∗ 0 () = +1 ≤ ∗ (+1) ∗ () and we are back to case (2b).
(3b) Suppose ∗ (+1) (+1) = +1. Then ∗ (+1) = ∗ (+2) and we
are back to case (1).
(4) Suppose ∗ () and (|∗ () = ) 0 (|∗ () = +1) 0.
Then the dishonest sender of type ∗ is indifferent between ( ∗ ()) and (+1
∗ (+1)).
That is, 4∗ (+1 ∗ ) = 0. So ++1 = ∗ () + ∗ (+1). Since
∗ (+1)
∗ () and +1 , this implies that ∗ (+1) +1 and ∗ () .
(4a) Suppose (+1) ∗ (+1). Then either +1 = and we have a contradic-
tion or ∗ (+1) = ∗ (+2). Since ∗ (+1) +1, we are back to case (1).
(4b) Suppose (+1) ≤ ∗ (+1). Note that (+1) min+1 ∗ . Since
+1 ∗ (+1), we have +1 ∗ (+1) ≥ (+1) ∗ . Since ∗ () ∗ (+1),
we also have ∗ () ∗ . Note that () max ∗ . Since ∗ () ∗ and
∗ () , this implies that ∗ () (). Hence, either = 1 and we have a
contradiction or ∗ () = ∗ (−1) and we are back to one of the above cases.
To summarize, ∗ () = (), ∀ ∈ . Since ∗ (·) and ∗ (·) are best responsesto each other and ∗ (·) is increasing, a message-monotone equilibrium exists in Γ.
32
References
[1] Athey, S. (2001): “Single Crossing Properties and the Existence of Pure Strategy
Equilibria in Games of Incomplete Information.” Econometrica, Vol. 69, No. 4, 861-
889.
[2] Banks, J. S. and J. Sobel (1987): “Equilibrium Selection in Signaling Games.”
Econometrica, Vol 55, 647-661.
[3] Blume, A., Y-G. Kim and J. Sobel (1993): “Evolutionary Stability in Games of
Communication.” Games and Economic Behavior, 5 (4), 547-575.
[4] Cai, H. and J. Wang (2006): “Overcommunication in Strategic Information Trans-
mission Games.” Games and Economic Behavior, 56 (1), 7-36.
[5] Crawford, V. (2003): “Lying for Strategic Advantage: Rational and Bounded Ra-
tional Misrepresentation of Intentions.” American Economic Review, 93, 133-149.
[6] Crawford, V. and J. Sobel (1982): “Strategic Information Transmission.” Econo-
metrica, Vol. 50, No. 6, 1431-1451.
[7] Chen, Y., N. Kartik and J. Sobel (2008) :“Selecting Cheap-Talk Equilibria.” Econo-
metrica, Vol. 76, No 1, 117-136.
[8] Farrell, J. (1993): “Meaning and Credibility in Cheap-talk Games.” Games and
Economic Behavior, 5, 514-531.
[9] Forsythe, R., R. Lundholm and T. Rietz (1999): “Cheap Talk, Fraud, and Adverse
Selection in Financial Markets: Some Experimental Evidence.” Review of Financial
Studies, Vol. 12, No. 3, 481-518.
[10] Green, J. and N. Stokey (2007): “A Two-person Game of Information Transmission.”
Journal of Economic Theory, 135, 90-104.
[11] Kartik, N. (2007): “Information Transmission with Almost-Cheap Talk.” Mimeo,
University of California at San Diego.
[12] Kartik, N. (2008): “Strategic Communication with Lying Cost.” forthcoming in
Review of Economic Studies.
[13] Kartik, N., M. Ottaviani and F. Squintani (2007): “Credulity, Lies and Costly Talk.”
Journal of Economic Theory, 134 (1), 93-116.
33
[14] Kreps, D. and R. Wilson (1982): “Sequential Equilibria.” Econometrica, Vol. 50,
No. 4, 863-894.
[15] Kreps, D. and R. Wilson (1982): “Reputation and Imperfect Information.” Journal
of Economic Theory, 27 (2), 253-279.
[16] Lo, P.-Y. (2006): “Common Knowledge of Language and Iterative Admissibility in
a Sender-Receiver Game.” Mimeo, Brown University.
[17] Malmendier, U. and D. Shanthikumar (2007):“ Are Small Investors Naive about
Incentives?” Journal of Financial Economics, 85, 457-489.
[18] Manelli, A. (1996): “Cheap Talk and Sequential Equilibria in Signaling Games.”
Econometrica, Vol. 64, No. 4, 917-942.
[19] Matthews, S. (1989): “Veto Threats: Rhetoric in a Bargaining Game.” Quarterly
Journal of Economics, 104, 347-369.
[20] Matthews, S., M. Okuno-Fujiwara and A. Postlewaite (1991): “Refining Cheap-Talk
Equilibria.” Journal of Economic Theory, 55, 247-273.
[21] Michaely, R. and K. Womack (2005): “Brokerage Rcommendations: Stylized Char-
acteristics, Market Responses and Biases.” in Thaler, R (Eds) Advanced in Behav-
ioral Finances II.
[22] Milgrom, P and J. Roberts (1982): “Predation, Reputation, and Entry Deterrence.”
Journal of Economic Theory, 27 (2), 280-312.
[23] Morgan, J. and P. Stocken (2003): “An Analysis of Stock Recommendation.” Rand
Journal of Economics, Spring 2003, 34(1), 183-203.
[24] Morris, S. (2001): “Political Correctness.” Journal of Political Economy, Vol. 109,
No. 2, 231-265.
[25] Ottaviani, M. and F. Squintani (2006): “Naive Audience and Communication Bias.”
International Journal of Game Theory, December, 35, 129-150.
[26] Sobel, J. (1985): “A Theory of Credibility.” Review of Economic Studies, 52, 557-
573.
[27] Warneryd, K. (1993): “Cheap Talk, Coordination, and Economic Stability.” Games
and Economic Behavior, 5, 532-546.
34