
Page 1:

Electoral agency in the lab

Leif Helland & Lars Monkerud

NSM BI

Page 2:

Motivation

• Liberal concept of democracy
• Roots in liberal political philosophy (Mill, Locke)
• Democracy negatively defined: citizens should be free from restraint and exploitation by state power: a) constitutional constraints, b) accountable leadership
• Accountability: achieved by retrospective voting for competing alternatives (Schumpeter 1942; Popper 1982; Riker 1984)
• A contemporary economist's formulation (Myerson 1999):

“When elected leaders use political power for their own profit, we call it corruption or abuse of power, or in its most extreme form, tyranny. One of the basic motivations of democracy is the hope that electoral competition should reduce such political abuse of power, below what would occur under an authoritarian system, just as market competition reduces oligopolistic profits below monopoly levels”

• What we want is:
1. An explicit model of electoral agency: a) based on parsimonious motivational assumptions, b) allowing for moral hazard on the part of elected representatives, and c) allowing voters to select representatives through elections (Austen-Smith & Banks 1989; Banks & Sundaram 1993; Besley 2006)

2. Data on how real voters behave in situations resembling such a model.

Page 3:

Model (1)

• Two stages t=(1,2) with an election in between.
• Incumbents are of two possible types i ∈ {g,b}.
• Let τ ∈ {s,1} signify a productivity parameter, with 0 < s < δ < 1, where δ is a discount factor.
• At the beginning of t=1, random draws determine the type of the first-stage incumbent and the nature of the productivity shock for both stages, with commonly known probabilities Pr(i=g) = π and Pr(τ=s) = q.
• Type and productivity are private information to the incumbent.
• Let q > ½ by assumption.
• The t=1 incumbent selects public production for t=1, and production is publicly announced.
• There is an election at the end of t=1.
• If the challenger wins, the challenger's type is randomly determined at the beginning of t=2, with commonly known probability Pr(i=g) = π.
• If the challenger loses, the t=1 incumbent continues as incumbent in t=2.
• The t=2 incumbent selects public production for t=2. Production is publicly announced, payoffs are distributed, and the game ends.
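To make the timing concrete, the following is a minimal Python sketch of one play of the two-stage game as laid out above. The function names, the independent draw of a productivity shock for each stage, and the rent and voting rules passed in as parameters are illustrative assumptions, not the authors' implementation.

```python
import random

def play_game(pi, q, theta, s, y, rent_rule, voter_rule):
    """One play of the two-stage electoral agency game (illustrative sketch)."""
    # Nature draws the first-stage incumbent's type and the productivity shocks.
    inc_type = "g" if random.random() < pi else "b"
    tau = [s if random.random() < q else 1.0 for _ in range(2)]

    # Stage 1: a g-type takes no rents; a b-type follows some rent rule (e.g. mimic or grab all).
    r1 = 0.0 if inc_type == "g" else rent_rule(tau[0])
    x1 = tau[0] * (theta * y - r1)          # publicly announced production

    # Election at the end of t=1: voters observe only x1.
    if not voter_rule(x1):
        inc_type = "g" if random.random() < pi else "b"   # challenger's type drawn at t=2

    # Stage 2: a b-type extracts maximal rents theta*y, a g-type none.
    r2 = 0.0 if inc_type == "g" else theta * y
    x2 = tau[1] * (theta * y - r2)
    return x1, x2
```

A mimicking b-type facing τ=1, for instance, would choose r1 = (1-s)θy, so that the announced production is x1 = sθy and voters cannot distinguish it from a g-type facing τ=s.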

Page 4:

Model (2)

• Let the payoff to an incumbent of type b be Vb = r1 + δr2, where rt is rents (diversion of public funds for private ends) in stage t.

• Let the type g incumbent set rt=0 for t=(1,2)

• Let voter payoff be Ut = (1-θ)y + αxt, where θ is the tax rate, y is income before tax, xt is public production in stage t, and α > 1

• Let the budget restriction of the incumbent be τ(θy - rt) = xt.

• Finally, let maximal rents equal rt = θy.

• It follows readily that a b-type incumbent always extracts maximal rents in t=2.

• Note also that r1 = θy dominates r1 = 0 for a b-type incumbent:

Rent taking in t=1    Reelected    Not reelected
r1 = 0                δθy          0
r1 = θy               (1+δ)θy      θy
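As a quick numerical check of the table, the dominance claim can be verified directly. The parameter values below are placeholders chosen only for illustration (the experimental values appear later in the deck); δ is the discount factor and θ the tax rate.

```python
delta, theta, y = 0.8, 0.5, 100   # placeholder values for illustration only

# Total rents for a b-type, by stage-1 rent choice and election outcome
# (a reelected b-type takes maximal rents theta*y in t=2, discounted by delta).
payoffs = {
    ("r1 = 0",       "reelected"):     delta * theta * y,
    ("r1 = 0",       "not reelected"): 0.0,
    ("r1 = theta*y", "reelected"):     (1 + delta) * theta * y,
    ("r1 = theta*y", "not reelected"): theta * y,
}

# r1 = theta*y gives at least as much as r1 = 0 for either election outcome.
for outcome in ("reelected", "not reelected"):
    assert payoffs[("r1 = theta*y", outcome)] >= payoffs[("r1 = 0", outcome)]
```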

Page 5:

Model (3)

• It may still be in the interest of a b-type incumbent to mimic a g-type incumbent (by setting 0 < r1 < θy) in order to be reelected (and take r2 = θy)
• The central question becomes: what is the rationally updated voter belief after observing production x1 = sθy?
• Let 0 < λ < 1 be the probability that a b-type incumbent facing τ=1 mimics a g-type incumbent facing τ=s
• Bayes' rule now provides an answer:

Π = Pr(g | x1 = sθy) = [Pr(x1 = sθy | g)·Pr(g)] / [Pr(x1 = sθy | g)·Pr(g) + Pr(x1 = sθy | b)·Pr(b)] = qπ / (qπ + (1-q)λ(1-π))

Now, assuming voters use pure cutoff strategies, in which they reelect only if Π > π, we see that as long as q > ½ all incumbents that produce either x1 = sθy or x1 = θy will be reelected unanimously, while no incumbent that produces x1 = 0 will get any votes. As long as δ > s, mimicking is profitable for a b-type incumbent.
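A short Python check of this update, with λ = 1 (a b-type facing τ=1 who always mimics); under that assumption it reproduces the update values used in the design on the next page. The function name is mine, not the authors'.

```python
def posterior_good(pi, q, lam=1.0):
    """Voter belief Pr(g | x1 = s*theta*y) from Bayes' rule."""
    return q * pi / (q * pi + (1 - q) * lam * (1 - pi))

print(round(posterior_good(pi=0.20, q=0.55), 2))   # 0.23 -> marginal update
print(round(posterior_good(pi=0.20, q=0.85), 2))   # 0.59 -> substantial update

# Mimicking pays for a b-type facing tau = 1 whenever
# (1 - s)*theta*y + delta*theta*y > theta*y, i.e. whenever delta > s.
```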

Page 6:

Design (1)

• q = .55 or q = .85, π = .20, giving Π = .23 (Marginal update) or Π = .59 (Substantial update)

• Endowment per stage 100; α = 1.1; θ = 0.5; s = 0.5

• Does update have an effect on behavior?

• Does group learning have an effect on behavior?

• Do subjects learn over time?

• How?

Page 7:

Design (2)

To root out social distributive preferences and intentions:
– Electorates matched in an absolute stranger protocol
– Politician programmed as an automaton

Page 8:

Non-parametric tests (1)

Games 1-7 (two-tailed MW tests)

Update effect (at P=25):
  Decisive voters: 0.17 (p=0.09)
  Voters in electorates: 0.14 (p=0.24)
Group effect (at P=25):
  Marginal update: 0.12 (p=0.26)
  Substantial update: 0.15 (p=0.17)

Page 9:

Non-parametric tests (2)

Games 1-5 & 16-20 (two-tailed MW tests)

Update effect:
  First 5 games: 0.24 (p=0.06)
  Last 5 games: 0.26 (p=0.01)

Learning effect:
  Marginal update: 0.08 (p=0.46)
  Substantial update: 0.14 (p=0.23)

Page 10:

Non-parametric tests (3)

Average mistakes (decisions): Absolute deviations of registered beliefs from equilibrium beliefs

Page 11:

Non-rational learning rules

• We look at two variants: fictitious play and payoff reinforcement learning

• Fictitious play variable: Dummy that takes value 1 if fictitious beliefs favor equilibrium behavior and zero otherwise.

• Uses all games, including the test games

• Payoff reinforcement: Continuous variable between zero and one that captures reinforcement (of previous payoff experiences) towards equilibrium behavior (Roth & Erev 1995).

• Estimated from the first two games with earnings

• What is reinforced is actions given states

• Strength of attractions is set at 55 (of the order of maximal payoffs)
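The reinforcement rule itself is not spelled out on the slide; below is a minimal sketch of the cumulative payoff-reinforcement scheme of Roth & Erev (1995) for a voter's reelect/oust decision after a given first-stage production level. Splitting the initial strength of 55 equally across the two actions, and the example payoff, are assumptions of mine rather than the authors' calibration.

```python
import random

def choose(attractions):
    """Pick an action with probability proportional to its attraction."""
    total = sum(attractions.values())
    r, acc = random.uniform(0, total), 0.0
    for action, a in attractions.items():
        acc += a
        if r <= acc:
            return action
    return action   # guard against floating-point rounding

def reinforce(attractions, action, payoff):
    """Cumulative reinforcement: add the realised payoff to the chosen action's attraction."""
    attractions[action] += payoff

# Attractions are kept per state, e.g. per observed first-stage production level.
attractions = {"reelect": 27.5, "oust": 27.5}   # initial strength 55, split equally (assumption)
action = choose(attractions)
reinforce(attractions, action, payoff=30.0)      # illustrative stage payoff
```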

Page 12:

Regression results

Update has a significantly positive effect on the probability of being in equilibrium (at P1=25) when there is no control for non-rational learning

Controlling for non-rational learning, this effect is halved and becomes far from significant

Reinforcement has a strong, positive and highly significant effect

Fictitious play does not have significant effects

Page 13:

Simulations (1)

Substantial update

A player facing a sequence of 1,000 independent games

Results are averaged over 10,000 sequences

Parameters, initial attractions, and strength of attractions as in the experiment

[Figure: simulated probability of an equilibrium decision (0 to 1) by game (1 to 1,000), plotted separately for first stage production = 50, 25 and 00.]

Page 14:

Simulations (2)

Marginal update

A player facing a sequence of 1,000 independent games

Results are averaged over 10,000 sequences

Parameters, initial attractions, and strength of attractions as in the experiment

[Figure: simulated probability of an equilibrium decision (0 to 1) by game (1 to 1,000), plotted separately for first stage production = 50, 25 and 00.]

Page 15:

Conclusions

• "...the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. He becomes primitive again. His thinking is associative and affective ... [This] may prove fatal to the nation.“ (Schumpeter 1942:262)

• Design: Control for “affective thinking” (distributive social preferences and intentions)

• Results: Thinking is “associative”, almost Pavlovian

• Under favorable conditions, this does not matter for "the fate of the Nation" (under payoff reinforcement it looks as if players were playing an equilibrium)

• Under less favorable conditions, it matters for "the fate of the Nation" (payoff reinforcement is not conducive to optimal discipline and selection)

• Creates a challenge for field data studies that find a pattern consistent with equilibrium: it need not imply that the data were generated by a perfect Bayesian equilibrium