Preparing for the Worst but Hoping for the Best: Robust (Bayesian) Persuasion
Piotr Dworczak Alessandro Pavan
February 2020
Motivation
Bayesian persuasion / information design
designer knows agents’ sources of information
trusts her ability to coordinate Receivers on actions most favorable to her
optimal information structure sensitive to fine details of agents’ beliefs
In many problems of interest,
agents’ sources of information (both before and after receiving Sender’s information) unknown
Sender may not trust her ability to coordinate Receivers
Quest for robustness
This Paper
Novel solution concept that accounts for such uncertainty/ambiguity
Lexicographic approach to the problem
Step 1: “Preparing for the worst”
designer seeks to protect herself against possibility that Nature provides information and coordinates agents on actions most adversarial to designer
Step 2: “Hoping for the best”
designer maximizes over all worst-case optimal policies assuming Nature and Receivers play favorably
Robust solutions
best-case optimal among worst-case optimal ones
max-max over max-min
(also maximize λ inf_{π∈Π} v(q, π) + (1 − λ) v(q, ∅), for λ large)
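In finite examples the lexicographic criterion is straightforward to state computationally. A minimal sketch, with a hypothetical payoff table `v[q][pi]` (all names and numbers are illustrative assumptions, not the paper's notation):

```python
# Toy sketch of the lexicographic "max-max over max-min" criterion.
# v[q][pi]: Sender's payoff when Sender picks policy q and Nature picks pi;
# pi = 0 stands for Nature staying silent. All numbers are hypothetical.
v = {
    "full_disclosure": {0: 0.3, 1: 0.3, 2: 0.3},  # guarantee 0.3 whatever Nature does
    "partial":         {0: 0.6, 1: 0.3, 2: 0.3},  # same guarantee, better best case
    "concealment":     {0: 0.8, 1: 0.1, 2: 0.1},  # high best case, poor guarantee
}

# Step 1 ("preparing for the worst"): keep the policies with the maximal guarantee.
guarantee = {q: min(payoffs.values()) for q, payoffs in v.items()}
best_guarantee = max(guarantee.values())
worst_case_optimal = [q for q in v if guarantee[q] == best_guarantee]

# Step 2 ("hoping for the best"): among those, maximize the payoff when Nature
# stays silent (pi = 0).
robust = max(worst_case_optimal, key=lambda q: v[q][0])
```

Here "concealment" is eliminated in step 1 despite its high best case, and "partial" beats "full_disclosure" in step 2: the robust solution is best-case optimal among worst-case optimal policies.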
Results
Separation theorem
Properties of robust solutions
Implications for various persuasion models
Conditionally-independent signals (online supplement)
Literature
Bayesian persuasion
... Calzolari and Pavan (2006), Brocas and Carrillo (2007), Rayo and Segal (2010), Kamenica and Gentzkow (2011), Ely (2017), Dworczak and Martini (2019) ...
Surveys: Bergemann and Morris (2019), Kamenica (2019)
Information design with adversarial coordination
Inostroza and Pavan (2018), Mathevet, Perego, and Taneva (2019), Morris et al. (2019), Ziegler (2019)
Persuasion with unknown beliefs
Kolotilin et al. (2017), Laclau and Renou (2017), Guo and Shmaya (2018), Hu and Weng (2019), Kosterina (2019)
Max-max over max-min design
Börgers (2017)
Plan
1 Introduction
2 Model
3 Robust Solutions
4 Separation Theorem
5 Corollaries
6 Applications
7 Conditionally-independent Robust Solutions (another day)
Model
Model: Environment
Payoff-relevant state: ω ∈ Ω (finite)
Prior: µ0 ∈ ∆Ω
Sender’s “signal”
q : Ω → ∆S
S: signal realizations
(Reduced-form description of) Sender’s payoff, given induced posterior µ ∈ ∆Ω
V̄(µ): highest payoff
V̲(µ): lowest payoff
Difference between V̄ and V̲:
strategy selection (multiple Receivers)
tie-breaking (single Receiver)
Model: Sender’s uncertainty
Nature designs information structure
π : Ω × S → ∆R
R: signal realizations
Multiple Receivers
discriminatory disclosures embedded into derivation of V̲(µ)
given common posterior µ, Nature provides (possibly private) signals to the agents and coordinates them on the course of action most adversarial to Sender (among those consistent with assumed solution concept)
e.g., Bayes-correlated eq. given µ
Conditioning on Sender’s signal
information acquisition (after hearing from Sender)
correlated noise
maximal concern for robustness
Online Appendix: conditionally independent signals
Robust Solutions
Robust Solutions
Sender’s expected payoffs when
Sender selects signal q
Nature selects signal π
v̄(q, π) ≡ ∑_{ω∈Ω} ∫_S ∫_R V̄(µ_0^{s,r}) dπ(r|ω, s) dq(s|ω) µ0(ω)

v̲(q, π) ≡ ∑_{ω∈Ω} ∫_S ∫_R V̲(µ_0^{s,r}) dπ(r|ω, s) dq(s|ω) µ0(ω)
where µ_0^{s,r} is the common posterior obtained from (q, π)
Worst-case optimality
Definition 1
Signal q is worst-case optimal if, for all signals q′,
inf_π v̲(q, π) ≥ inf_π v̲(q′, π).
maximal payoff guarantee
Worst-case optimality
Given any posterior µ ∈ ∆Ω, Sender’s (lowest) payoff if, starting from µ, the state is fully revealed:
V^full(µ) ≡ ∑_{ω∈Ω} V̲(δ_ω) µ(ω)
where δ_ω is the Dirac measure assigning probability 1 to ω.
Remark 1
Since both Nature and Sender can fully reveal the state, signal q is worst-case optimal iff
inf_π v̲(q, π) = V^full(µ0)
W : set of worst-case optimal signals
non-empty (full disclosure is worst-case optimal)
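Remark 1 is easy to sanity-check numerically in a binary-state toy example. A minimal sketch, where the tie-breaking rule `V_low` is an illustrative assumption (not the paper's specification):

```python
# Binary-state sanity check of Remark 1: full disclosure induces degenerate
# posteriors, which no further signal of Nature's can move, so its guarantee
# equals V_full(mu0). The tie-breaking rule V_low is an illustrative assumption.
mu0 = 0.3                                   # prior probability of state 1

def V_low(mu):                              # Sender's lowest payoff at posterior mu
    return 1.0 if mu > 0.5 else 0.0

# V_full(mu0) = sum over states of V_low(delta_omega) * mu0(omega)
V_full = mu0 * V_low(1.0) + (1 - mu0) * V_low(0.0)

# Full disclosure: posterior delta_1 w.p. mu0, delta_0 w.p. 1 - mu0; degenerate
# beliefs are fixed points of Bayesian updating, so Nature cannot lower this.
guarantee_full_disclosure = mu0 * V_low(1.0) + (1 - mu0) * V_low(0.0)

assert guarantee_full_disclosure == V_full  # Remark 1: guarantee = V_full(mu0)
```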
Robust Solutions
Definition 2
Signal q^RS is a robust solution if it maximizes v̄(q, ∅) over W.
Lexicographic preferences
max-max over max-min policies
step 1: max-min (worst-case optimal policies)
step 2: max-max (highest payoff if Nature and Receivers play favorably)
Clearly, q^RS also maximizes sup_π v̄(q, π) over W
However, Sender prefers to provide information herself rather than counting on Nature to do it
Robust Solutions
Lemma 1
Signal q^RS is a robust solution iff the distribution over posterior beliefs ρ^RS ∈ ∆∆Ω that q^RS induces maximizes
∫ V̄(µ) dρ(µ)
over the set of distributions over posterior beliefs W ⊂ ∆∆Ω satisfying (a) Bayes plausibility
∫ µ dρ(µ) = µ0
and (b) “worst-case optimality” (WCO)
∫ lco(V̲)(µ) dρ(µ) = V^full(µ0)
Robust vs Bayesian Solutions
Bayesian solutions:
q^BP maximizes v̄(q, ∅) over Q (feasible signals)
induced distribution over posterior beliefs ρ^BP ∈ ∆∆Ω maximizes ∫ V̄(µ) dρ(µ) over all distributions ρ ∈ ∆∆Ω satisfying Bayes plausibility, ∫ µ dρ(µ) = µ0
Robust solutions:
q^RS maximizes v̄(q, ∅) over W ⊂ Q (worst-case optimal signals)
induced distribution over posterior beliefs ρ^RS ∈ ∆∆Ω maximizes ∫ V̄(µ) dρ(µ) over all distributions ρ ∈ ∆∆Ω satisfying, in addition to Bayes plausibility ∫ µ dρ(µ) = µ0, the WCO constraint ∫ lco(V̲)(µ) dρ(µ) = V^full(µ0)
Separation Theorem
Separation Theorem
Theorem 1
Let
F ≡ {B ⊆ Ω : V̲(µ) ≥ V^full(µ) for all µ ∈ ∆B}.
Then,
W = {ρ ∈ ∆∆Ω : ρ satisfies BP and supp(µ) ∈ F for all µ ∈ supp(ρ)}
Therefore, ρ^RS ∈ ∆∆Ω is a robust solution iff ρ^RS maximizes
∫ V̄(µ) dρ(µ)
over all distributions over posterior beliefs ρ ∈ ∆∆Ω satisfying BP and s.t., for any µ ∈ supp(ρ), supp(µ) ∈ F.
Separation Theorem
Idea:
suppose Sender induces posterior µ with supp(µ) = B for which there exists η ∈ ∆B s.t. V̲(η) < V^full(η)
starting from µ, Nature can induce η with strictly positive probability
starting from µ, Nature can bring Sender’s payoff strictly below V^full(µ)
because Nature can respond to any other posterior µ′ ∈ supp(ρ) by fully disclosing the state,
∫ lco(V̲)(µ) dρ(µ) < V^full(µ0)
policy ρ not worst-case optimal
KG’s prosecutor example
[plot: Sender’s expected payoff before vs. after Nature’s disclosure]
Figure: Prosecutor example
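The logic of the prosecutor example can be checked numerically. A sketch, assuming (as an illustration) that the judge convicts iff P(guilty) > 1/2, with ties broken against the Sender in the worst case:

```python
# Worst-case check in KG's prosecutor example (binary state; mu = P(guilty)).
# Illustrative tie-breaking assumption: the judge convicts iff mu > 1/2, so the
# Sender's lowest payoff is V_low(mu) = 1 if mu > 1/2, else 0; prior mu0 = 0.3.
mu0 = 0.3
V_low  = lambda mu: 1.0 if mu > 0.5 else 0.0
V_full = lambda mu: mu * V_low(1.0) + (1 - mu) * V_low(0.0)   # = mu

# KG's Bayesian solution splits mu0 into posteriors 0 (w.p. 0.4) and 0.5 (w.p. 0.6):
best_case  = 0.4 * V_low(0.0) + 0.6 * 1.0        # judge convicts at 1/2 when ties favor Sender
worst_case = 0.4 * V_low(0.0) + 0.6 * V_low(0.5) # Nature idle, ties broken against Sender

assert best_case == 0.6
assert worst_case == 0.0 < V_full(mu0)           # not worst-case optimal

# Separation logic: the pooled support {innocent, guilty} is not in F, e.g. at
# eta = 0.5, V_low(eta) = 0 < V_full(eta) = 0.5; so every posterior of a robust
# solution is degenerate: full disclosure, worth V_full(mu0) = 0.3.
assert V_low(0.5) < V_full(0.5)
```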
Corollaries
Existence
Corollary 1
A robust solution always exists.
existence guaranteed by possibility for Nature to condition on realization of Sender’s signal
State separation
Corollary 2
Suppose there exist ω, ω′ ∈ Ω and λ ∈ (0, 1) s.t.
V̲(λδ_ω + (1 − λ)δ_{ω′}) < λV̲(δ_ω) + (1 − λ)V̲(δ_{ω′}).
Then any robust solution must separate ω and ω′.
Assumption: there exists some belief supported on {ω, ω′} under which Sender’s payoff is below the full-disclosure payoff
Conclusion: ALL posterior beliefs must separate ω and ω′.
Robustness of Bayesian Solutions
Corollary 3
Bayesian solution ρ^BP is robust iff for any µ ∈ supp(ρ^BP) and any η ∈ ∆Ω s.t. supp(η) ⊂ supp(µ),
V̲(η) ≥ V^full(η)
Binary state: any robust solution is either
full disclosure, or
a Bayesian solution
Worst-case optimality preserved under more disclosure
Corollary 4
W closed under Blackwell dominance: If ρ′ ∈ W, and ρ Blackwell dominates ρ′, then ρ ∈ W.
Result not true in case of conditionally independent signals
Informativeness of Robust vs Bayesian solutions
Corollary 5
Given any Bayesian solution ρ^BP, there exists a robust solution ρ^RS s.t. either ρ^RS and ρ^BP are not comparable in the Blackwell order, or ρ^RS Blackwell dominates ρ^BP.
If Bayesian solution ρ^BP is Blackwell more informative than robust solution ρ^RS, then ρ^BP is also robust
Reason why robustness calls for more disclosure
little to do with indifference on Sender’s part
concealing information gives Nature more room for adversarial design
If Bayesian solution ρ^BP is not robust and is strictly Blackwell dominated by robust solution ρ^RS, then ρ^RS separates states that are not separated under ρ^BP
robustness never calls for a mean-preserving spread (MPS) with the same supports
Concavification
Let v_low := min_{ω∈Ω} V̲(δ_ω) − 1
Auxiliary function
V_F(µ) =
V̄(µ) if supp(µ) ∈ F and V̄(µ) ≥ v_low
v_low otherwise
Corollary 6
A feasible distribution ρ ∈ ∆∆Ω is robust iff
∫ V_F(µ) dρ(µ) = co(V_F)(µ0).
Furthermore, there always exists a robust solution ρ with |supp(ρ)| ≤ |Ω|.
V_F upper semi-continuous: results follow from arguments similar to those in the BP literature
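For a binary state, co(V_F)(µ0) can be brute-forced over two-point splits, since Corollary 6 guarantees a robust solution with at most |Ω| = 2 posteriors. A sketch with an illustrative worst-case payoff under adversarial tie-breaking (under which F contains only degenerate beliefs):

```python
# Brute-force co(V_F)(mu0) for a binary state via two-point splits; by
# Corollary 6, restricting to |supp(rho)| <= |Omega| = 2 is without loss.
# V_low is an illustrative payoff with adversarial tie-breaking at 1/2.
mu0 = 0.3
V_low = lambda mu: 1.0 if mu > 0.5 else 0.0
v_low = min(V_low(0.0), V_low(1.0)) - 1                 # = -1

def V_F(mu):
    # supp(mu) in F iff mu is degenerate, in this example
    return V_low(mu) if mu in (0.0, 1.0) else v_low

grid = [i / 100 for i in range(101)]
best = float("-inf")
for m1 in grid:
    for m2 in grid:
        if m1 < mu0 <= m2:                              # Bayes plausibility: mix back to mu0
            lam = (m2 - mu0) / (m2 - m1)                # weight on the low posterior m1
            best = max(best, lam * V_F(m1) + (1 - lam) * V_F(m2))

# best is approximately 0.3 = V_full(mu0): full disclosure attains co(V_F)(mu0)
```

The winning split is (0, 1) with weights (0.7, 0.3), i.e. full disclosure; any split through a non-degenerate posterior is penalized at v_low.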
Applications
Limits to Third-degree Price Discrimination
Bergemann, Brooks, Morris (2015)
designer segments market to maximize combination of CS and PS
Proposition 1
Suppose the Pareto weight on seller’s surplus is strictly positive. Full disclosure is the unique robust solution. When, instead, the designer cares only about consumer surplus, the optimal signal is the BBM solution.
For any posterior µ assigning positive measure to more than one buyer’s value, Nature can construct a posterior η inducing the seller to ask the highest price in supp(µ), thus inducing no trade.
Total surplus under η strictly below full-information
(if weight on PS strictly positive)
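A two-value toy version of this argument (the values and tie-breaking rule are illustrative assumptions, not BBM's general model):

```python
# Toy sketch: buyer value v in {1, 3}, zero cost; given belief mu = P(v = 3),
# the seller charges the monopoly price, breaking ties adversarially (toward
# the high price). All numbers are illustrative, not BBM's model.
def total_surplus(mu):
    price = 3 if 3 * mu >= 1 else 1               # revenue 3*mu at price 3 vs 1 at price 1
    return 3 * mu if price == 3 else 1 + 2 * mu   # price 3 excludes v = 1 buyers

def full_info_ts(mu):
    return 1 + 2 * mu                             # under full disclosure every buyer trades

# From any pooled segment, Nature can push beliefs to some eta >= 1/3, where
# the seller charges the high price and total surplus falls strictly below
# the full-information level:
eta = 0.5
assert total_surplus(eta) == 1.5
assert full_info_ts(eta) == 2.0
```

This surplus loss is why, with a strictly positive weight on seller's surplus, no pooling segment survives the worst case and full disclosure is the unique robust solution in the sketch.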
Privately Informed Receiver
Guo and Shmaya (2019)
Exogenous price p ∈ (0, 1)
Seller’s payoff is 1 if trade, 0 otherwise
Buyer’s exogenous private information: f(t|ω), satisfying MLRP
Proposition 2
Any robust solution separates states ω ≤ p from states ω′ > p
Starting from any µ s.t. supp(µ) ⊇ {ω, ω′} with ω < p < ω′, Nature can induce a posterior η s.t. E_η[ω] < p, thus inducing no trade
Sender’s payoff given η below full information
Given ρ ∈ ∆∆Ω, Nature can bring Sender’s payoff below V full(µ0)
Policy ρ not worst-case optimal
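The splitting step can be made concrete in a two-state sketch (the trade rule and all numbers are illustrative assumptions, not Guo-Shmaya's model):

```python
# Two-state sketch of Nature's split. Exogenous price p; toy trade rule:
# trade occurs iff the buyer's posterior mean of omega is at least p.
p = 0.5
omega_lo, omega_hi = 0.2, 0.8
mu_hi = 0.6                                    # P(omega_hi) under the pooled posterior mu

# Nature splits mu into eta (mean below p => no trade) and the degenerate
# belief on omega_hi, with weights chosen so the split averages back to mu:
eta_hi = 0.1
mean_eta = eta_hi * omega_hi + (1 - eta_hi) * omega_lo    # = 0.26 < p
lam = (1 - mu_hi) / (1 - eta_hi)               # weight on eta: lam*eta_hi + (1-lam)*1 = mu_hi
assert abs(lam * eta_hi + (1 - lam) * 1.0 - mu_hi) < 1e-12

trade = lambda mean: 1.0 if mean >= p else 0.0
payoff_after_nature = lam * trade(mean_eta) + (1 - lam) * trade(omega_hi)
payoff_full_reveal  = mu_hi * trade(omega_hi) + (1 - mu_hi) * trade(omega_lo)
assert payoff_after_nature < payoff_full_reveal   # 5/9 < 3/5: below full information
```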
Lemons problem
Seller’s value: ω (known to seller)
Buyer’s value: ω + ∆, with ∆ > 0 (unknown to buyer)
Exogenous price p drawn from U[0, 1]
Trade occurs iff
p ≥ ω (seller accepts)
E_µ[ω | ω ≤ p] + ∆ > p (buyer accepts, conditioning on seller acceptance)
Seller designs info structure
Proposition 3
Under any robust solution ρ^RS, for any µ, µ′ ∈ supp(ρ^RS), diam(supp(µ)), diam(supp(µ′)) ≤ ∆ but diam(supp(µ) ∪ supp(µ′)) > ∆.
Robust solutions are minimally informative among those that eliminate adverseselection!
Supermodular Games
Continuum of Receivers
a_i ∈ {0, 1}
A ∈ [0, 1]: aggregate “attack”
Payoff from not attacking: 0
Payoff from attacking:
g > 0 if A ≥ ω
b < 0 if A < ω
Designer’s payoff: 1− A
Bayesian solution: Upper censorship
Reveals each ω < 0 w.p. γ^BP ∈ (0, 1) (w.p. 1 − γ^BP, reveals nothing)
Conceals all ω > 0
Proposition 4
Robust solution reveals ω < 0 w.p. γ^∗ > γ^BP, conceals all ω ∈ [0, 1], reveals all ω > 1 with certainty.
Revelation of upper dominance region:
else Nature constructs η inducing A = 1 also for ω > 1
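A sketch of the upper-dominance-region argument, with illustrative payoff parameters g, b (any g > 0 > b with g + b of the right sign would do):

```python
# Why the robust solution must reveal the upper dominance region (omega > 1).
# Attacking yields g > 0 if the aggregate attack A >= omega and b < 0 otherwise;
# A <= 1 always. Under adversarial strategy selection, "everyone attacks"
# (A = 1) is an equilibrium at belief mu iff the expected attack payoff at
# A = 1 is nonnegative. Parameters g, b are illustrative.
g, b = 1.0, -2.0

def all_attack_is_equilibrium(prob_omega_le_1):
    # at A = 1, the attack succeeds exactly when omega <= 1
    return g * prob_omega_le_1 + b * (1 - prob_omega_le_1) >= 0

# A state omega > 1 pooled with enough mass on omega <= 1 is still attacked
# (A = 1, designer payoff 1 - A = 0):
assert all_attack_is_equilibrium(0.8)      # 1.0*0.8 - 2.0*0.2 = 0.4 >= 0
# Revealed on its own, attacking is strictly dominated and the designer gets 1:
assert not all_attack_is_equilibrium(0.0)  # payoff b < 0: nobody attacks
```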
Conditionally Independent Signals
Conditionally-independent Robust Solutions
Nature cannot condition on realization of Sender’s signal
π : Ω → ∆R
so far: π : Ω × S → ∆R
Existence of CI-Robust Solutions
A robust solution may fail to exist
A robust solution exists if V is continuous
Definition
A feasible distribution ρ ∈ ∆∆Ω is a weak CI-robust solution if it maximizes
∫ V̄(µ) dρ(µ)
over cl(W_CI), where cl(W_CI) is the closure (in the weak∗ topology) of the set of CI-worst-case optimal solutions
Theorem
A weak CI-robust solution exists regardless of V.
Separation under CI-Robust Solutions
Sufficient conditions for state separation under CI-robust solutions
weaker than those for robust solutions
whenever ω and ω′ must be separated under CI-robust solutions, they must be separated under robust solutions
Full Disclosure and CI-robustness of Bayesian solutions
Sufficient conditions for
full-disclosure to be unique CI-robust solution
all distributions to be CI-worst-case optimal
CI-robust solutions: Binary state
Unlike robust solutions, CI-robust solutions for binary states need not coincide with
Bayesian solutions
full disclosure
Blackwell Informativeness of CI-robust solutions
Unlike robust solutions, CI-robust solutions need not be Blackwell more informativethan Bayesian solution
Example in which unique Bayesian solution is Blackwell strictly more informativethan all CI-robust solutions
Nature cannot engineer a mean-preserving spread conditional on s
ρ^RS may be Blackwell less informative than ρ^BP and yet Nature may not be able to inflict on Sender the same payoff as under ρ^BP
Conclusions
Bayesian persuasion when Sender is uncertain about
Receivers’ information
strategy selection
Robust solutions
best-case optimal among worst-case optimal ones
max-max over max-min criterion
Separation theorem
any pair of states over which Nature can construct a belief yielding less than the full-information payoff is separated
Robustness
more disclosure
but only through more separation (not mean-preserving spreads over the same support)
Relatively simple two-step design procedure
step 1: Nature designs info to minimize Sender’s payoff
step 2: designer solves standard persuasion problem on restricted set of worst-case optimal policies
Implications for applications
Most Important Slide
THANKS!