
Chapter 2. The State of Nature

Consider the case of a "society" consisting of two individuals who live on two deserted

islands. They have no interaction with one another. If each one acts efficiently, then they will

each secure the best outcome that is available to them. The resulting state will be Pareto-optimal,

simply because there are no further actions available to either agent that can improve his or her

own condition. But if one of them swims over to the other's island, the situation changes entirely.

They are now in a position to affect one another through their actions. More specifically, their

actions may now generate externalities – effects on the other that are not factored into their

calculations. An agent may select an action that, while efficient for him, is harmful to the other.

If both agents do this to each other, then they can easily wind up with a social state that is Pareto-

suboptimal. Thus the actions that are best for each are not necessarily best for all.

For example, suppose that these two agents (call them Bill and Ted), are surviving by

fishing. In the morning each one of them sets traps at the mouth of the streams on his side of the

island; in the evening each returns to collect all the fish that have been caught over the course of

the day. One morning, as Bill is setting his traps, he comes across one of Ted's traps, full of fish.

So instead of setting up one of his own traps, he just takes Ted's fish. The next day he finds a few

more of Ted's traps, and so cleans them out as well. However, when he returns to his own, he

finds that some of them are empty – Ted must be doing the same thing to him. That evening, Bill

realizes that it is a waste of time for him to set any traps at all. The streams are so far apart that

he can't keep an eye on them, so there is no way of stopping Ted from taking his fish. He would

be better off if he just concentrated on taking Ted's fish, and didn't bother trying to catch any of

his own. So the next day, he sets out to find Ted's traps – only to discover that there aren't any.

On the other side of the island, Ted, who has been thinking the same thing as Bill, experiences a

similar disappointment. After a few days of this, they both give up. Each of them decides to set

just one trap, and spend the day keeping an eye on it. This arrangement is stable, but it is much

worse for both of them. They each get fewer fish than they did before they started poaching from

each other's traps. But neither one has an incentive to deviate from this arrangement, because

setting up another trap would be a lot of work, and would require leaving the first trap

Normative Economics/Chapter 2 42

unsupervised. Without some reasonable assurance that they will benefit from their labour, neither

is willing to put in the effort required to catch more fish.

What this little story shows is that Pareto-optimal outcomes are not at all guaranteed as a

consequence of social interaction. Naturally, they can be achieved. There are all sorts of things

that Bill and Ted can do to get themselves out of the situation they are in. But in order to

understand what these devices are, and how they function, it is necessary to understand more

clearly the structure of the situation they are in, and how likely it is that such situations will arise.

In order to do this, it is necessary to develop a somewhat more formal model of the way that

individuals choose their actions, and the way that their choices interact. The theory that attempts to

model the former is known as decision theory, the latter as game theory. This chapter begins

with a brief presentation of these two models, followed by an analysis of conditions under which

social interaction can be expected to yield Pareto-suboptimal states.

2.1 Decision theory

Decision theory is the formal model of what was earlier referred to as the maximizing

conception of practical rationality. This conception of rationality is grounded in what

philosophers refer to as belief-desire psychology. The underlying idea is simply that agents act

in order to achieve the goals that they desire most, and that they use beliefs in order to determine

how best to achieve these goals. Suppose I have a desire for a cup of coffee. I use my beliefs

about where the various cafés are located, and how close they are to me, in order to determine how

best to satisfy my desire. I remember that there is one at the corner of Euclid and College St., and

so I begin to walk south. I arrive at the café, and drink my cup of coffee.

In this example, the events that are relevant to the success of my action can be divided

into three categories:

• the action: This is an event over which I have direct control. In the café example,

walking south is the action.

• the outcome: This is the event that ultimately occurs as a result of my action. In this

example, drinking coffee is the outcome.


• the state: The state is a catch-all category that refers to all of the conditions that must be

in place in order for my action to produce the outcome that ultimately occurs. For

example, the fact that there actually is a café on the corner of Euclid and College, that

they have coffee, and that they are willing to sell it, are all states of affairs that must

obtain in order for the action of walking south to produce the outcome of my drinking a

cup of coffee.

The goal of practical deliberation is to select an action (e.g. I need to decide whether to

walk north, south, east, or west). I draw upon my beliefs and desires in order to do this. First, I

consider the outcomes that could possibly arise as a result of the actions that are available to me.

I then rank these outcomes in terms of which is the most desirable, and consider which action

will bring about this outcome. In order to decide this, I must determine which of the possible

states is most likely to obtain. I will then select the action that is most likely to bring about the

most desired outcome. Thus a desire for an outcome hooks up with a belief about the state in

order to recommend a particular action. In order for the action to be a success, the state must

actually correspond to my belief, and the outcome must actually satisfy my desire.

This kind of decision procedure is represented in Figure 2.1. The diagram shows a

decision problem with three actions available {a1, a2, a3}, three possible states {s1, s2, s3} and

three possible outcomes {o1, o2, o3}. Things start off with nature 'choosing' which state will

obtain. This puts the agent at one of his three decision nodes. However, without knowing which

node he is at, the agent is unable to determine which action will produce which outcome. For

instance, if the state is s1 (i.e. nature has 'chosen' branch s1), then action a1 will produce outcome

o1. However, if the state is s2, choosing a1 will produce o2, and if the state is s3, a1 will produce o3.

In order to show that the agent does not know which decision node he is at, the nodes are

connected together with a dashed line. This group of nodes is referred to as an information set.

In order to get the outcome that he wants, the agent has to guess which state obtains (i.e. which

node he is at).


[Figure: nature first 'chooses' a state (s1, s2, or s3); the agent, whose three decision nodes are joined by a dashed information set, then chooses an action (a1, a2, or a3), each leading to one of the outcomes o1, o2, o3.]

Figure 2.1 Decision tree

One way of representing the agent's decision procedure is to regard it as a matter of

pruning the decision tree. The agent's beliefs and desires can be introduced into the diagram

simply by cutting off some branches. Suppose that the agent is able to ascertain which state

obtains, and so believes, for example, that nature has 'chosen' s2. This allows him to cut the s1 and

s3 branches off at nature's node. What is left of the decision tree after this "naturalistic pruning" is

shown in Figure 2.2.

[Figure: the single remaining branch s2, from which actions a1, a2, and a3 lead to outcomes o2, o1, and o3 respectively.]

Figure 2.2 Pruned tree


Once the problem is simplified in this way, the agent knows which node he is at, and so

knows which actions will produce which outcomes. As a result, he can simply choose the action

that leads to the outcome that he most desires. This can be thought of as a final pruning, in

which each branch leading to an outcome other than the one most desired is cut off. What is left

after this "desiderative" pruning is a single action, which the agent can then proceed to perform.
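Under certainty, the two prunings amount to a simple lookup: the known state fixes which outcome each action produces, and the agent keeps only the action leading to the outcome he desires most. A minimal sketch (the mapping for a1 follows the text's description of Figure 2.1; the remaining entries and the desirability numbers are invented for illustration):

```python
# A sketch of 'pruning' under certainty: once the state is known, each
# action maps to a single outcome, and the agent picks the action whose
# outcome he desires most. All specific values here are illustrative.

# outcome[state][action] -> outcome label (as in Figure 2.1)
outcome = {
    "s1": {"a1": "o1", "a2": "o2", "a3": "o3"},
    "s2": {"a1": "o2", "a2": "o1", "a3": "o3"},
    "s3": {"a1": "o3", "a2": "o2", "a3": "o1"},
}

desirability = {"o1": 10, "o2": 4, "o3": 0}  # the agent's ranking

def choose(known_state):
    """'Naturalistic pruning' (fix the state), then 'desiderative
    pruning' (keep only the branch leading to the best outcome)."""
    branches = outcome[known_state]
    return max(branches, key=lambda a: desirability[branches[a]])

print(choose("s2"))  # a2, since it leads to o1, the most desired outcome
```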

This model provides a highly simplified representation of what is known as the

instrumental conception of practical rationality. It is called instrumental because, according to

this view, the central function of practical deliberation is not to determine which goals the agent

should pursue, but merely how best to achieve those that he has. Thus reason is more like a tool,

or an instrument – it can help us to do what we want to do, but it cannot tell us what we should

do. This is often expressed in the vocabulary of “means” and “ends.” Here, actions are regarded

as means and outcomes as ends. The function of rationality is to use beliefs about the state, in

order to determine what the most appropriate means are to the realization of the ends that one

desires. One can find the “classical” articulation of this view in the work of Thomas Hobbes

(1651), along with an important critical analysis in that of Kant (1785), who discusses

instrumental rationality in terms of what he calls “hypothetical imperatives.”

The “means-ends” vocabulary becomes somewhat limiting, however, when the agent is

less than certain about which state will obtain. In this case, she can only assign probabilities to

the occurrence of various states, and so will not know with certainty which action will bring

about the desired outcome. She will therefore need to select the action that gives her the best

chance of getting her favoured outcome. But even then, things are not quite so simple. She may

face a situation in which she must choose between an action that gives her a reasonable chance of

getting her favoured outcome, but also some chance of getting an outcome that is disastrous for

her, and an action that gives her a guarantee of at least a mediocre outcome. In order to choose

between these two options, she will have to consider how much she really wants the best

outcome, and whether pursuing it instead of the mediocre outcome is worth the risk of disaster.

So in the same way that beliefs will have to be given probabilities, her desires will have to be

assigned priority levels.


Figure 2.3 depicts a decision tree for the agent that shows the probability with which the

agent believes that each state will obtain between angle brackets <x>, and the intensity with

which the agent desires that each outcome obtain between parentheses (x). Since one of the three

states must obtain, the probabilities assigned to nature's move must add up to 1. On the other

hand, the sum of the numbers used to represent the intensity of the agent's desires is not

important, since the only thing that matters is the strength of these desires relative to one another.

For simplicity, the most preferred outcome will be assigned an intensity of 10, and the least

preferred a 0 (so all the other outcomes will fall somewhere in between).

[Figure: the decision tree of Figure 2.1, with nature's branches labelled with the agent's beliefs <.2>, <.3>, <.5>, and the outcomes labelled with desire-intensities (10), (4), (0).]

Figure 2.3 Decision tree with uncertainty

In order to decide what to do, the agent cannot prune the decision tree as before, because

each of the states can occur with some probability. She must instead calculate the expected

utility associated with each action. The idea is as follows. Since the agent believes that s1 obtains


with a probability of .2, she can see that choosing a1 will give her a 20 per cent chance of getting

her most preferred outcome o1. It also gives her a 30 per cent chance of getting the mediocre

outcome o2, and a 50 per cent chance of getting the worst outcome, o3. The total value associated

with this action can therefore be determined by adding up the value of each of these chances.

These can be calculated by multiplying the value of the outcome by the probability of getting it.

In the case of a1 this is (.2 * 10) + (.3 * 4) + (.5 * 0) = 3.2. This number is the expected utility of

a1. The expected utility of all the actions can be calculated in the same way:

U(a1) = (.2 * 10) + (.3 * 4) + (.5 * 0) = 3.2

U(a2) = (.2 * 0) + (.3 * 10) + (.5 * 4) = 5

U(a3) = (.2 * 4) + (.3 * 0) + (.5 * 10) = 5.8
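The same computation can be written out as a short sketch in Python (the probabilities and desire-intensities are those of Figure 2.3; the function and variable names are mine):

```python
# Expected utility of each action, using the beliefs <.2, .3, .5> and the
# desire-intensities from Figure 2.3. u[a][s] is the value of the outcome
# that action a produces when state s obtains.
p = {"s1": 0.2, "s2": 0.3, "s3": 0.5}
u = {
    "a1": {"s1": 10, "s2": 4, "s3": 0},
    "a2": {"s1": 0, "s2": 10, "s3": 4},
    "a3": {"s1": 4, "s2": 0, "s3": 10},
}

def expected_utility(a):
    return sum(p[s] * u[a][s] for s in p)

for a in u:
    print(a, round(expected_utility(a), 1))  # a1 3.2, a2 5.0, a3 5.8

best = max(u, key=expected_utility)  # a3 maximizes expected utility
```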

Now, instead of simply selecting the action that produces the outcome that the agent most

desires, she selects the action with the highest expected utility. The agent is therefore said to

maximize expected utility. In this example, a3 is the best choice. This is not obvious without the

calculation. Although a3 gives the agent the best chance of getting her most preferred outcome

(50 per cent chance of getting o1), it also gives her a fairly high chance of getting her least

preferred outcome, o3. Action a2, on the other hand, minimizes the chances of getting the worst

outcome, and maximizes the chances of getting the second best. Action a3 winds up being better

because the agent likes the best outcome a lot more than the second best. If the value of o2 were

increased from 4 to 8, then a2 would be better than a3.

This example shows how the conception of practical rationality as utility-maximization

follows very naturally from the application of belief-desire motivational psychology to

probabilistic choice problems. However, some theorists have doubted whether it is reasonable to

represent beliefs and desires as having specific numerical probabilities or intensities. Obviously,

in day-to-day contexts we have only a vague idea of how convinced we are of something, or of

how much we want something. There is, however, a very easy procedure that we can use to fix

these levels. To get the general idea, consider a person who is offered three different kinds of

fruit to eat -- an apple, an orange, and a banana. He knows that he likes apples the best, and

prefers oranges to bananas, but is not sure how much he likes one over the other. But it is easy to


find out, using the following procedure. Assign "apple" a value of 10. Now cut a piece off the

apple (say 10 per cent of it), and offer him a choice between what is left of the apple and the

entire orange. If he chooses what is left of the apple, cut off another piece, and repeat the offer.

Eventually, so little of the apple will be left that he will begin to prefer the orange. The value of

the orange is therefore equal to the portion of the apple that is left at precisely the point at which

his preference switches. (So if cutting off 30 per cent of the apple makes him indifferent between

the apple and the orange, then an orange is worth .7 apples to him. Multiplying this number by

the value of the apple yields the intensity of his desire for an orange.) The same procedure can be

repeated for the banana. In the end, the value of each piece of fruit is expressed as a fraction of

the most desired piece.

Clearly, this procedure works because one of the outcomes is perfectly divisible. We can

cut as many pieces off the apple as we like. In order to apply the same general idea to any set of

outcomes, a more abstract procedure must be devised. To do this, all one has to do is offer the

person a set of lotteries that give her a greater or lesser chance of "winning" the best outcome.

Again, assign "apple" a value of 10, and "nothing" a value of 0. Oranges and bananas will be

somewhere in between. Now offer her a gamble that gives her a 90 per cent chance of getting the

apple, and a ten per cent chance of getting nothing. (This is called a "lottery over the extremes,"

as it gives her some chance of getting either her best or worst outcome.) If she prefers this lottery

to the orange, offer her a new lottery that gives her a lower chance of getting the apple.

Eventually the chances of getting the apple will be so slim that she will begin to prefer the

orange. The value of the orange can therefore be set equal to the chance of getting the apple in

the lottery at which her preference switches. (So if she is indifferent between the orange and a

lottery that gives her a 70 per cent chance of getting the apple, then an orange is worth .7 apples

to her. Multiplying this number by the value of the apple yields the intensity of her desire for an

orange.)
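The lottery procedure can be sketched as a bisection on the winning chance q. In a real elicitation each comparison would be a question put to the person; here a hypothetical risk-neutral respondent, who happens to value the orange at .7 apples, stands in:

```python
# A sketch of the lottery procedure: repeatedly offer a "lottery over the
# extremes" (chance q of the apple, 1 - q of nothing) against the orange,
# narrowing in on the q at which the person's preference switches.
# The respondent's valuation is hidden inside prefers_lottery(); in a
# real elicitation it would be an actual question put to the person.

TRUE_ORANGE_VALUE = 7.0   # hypothetical: the orange is 'worth' .7 apples

def prefers_lottery(q):
    """Does the person prefer a q-chance at the apple (worth 10)
    over the orange for sure? (Risk-neutral stand-in.)"""
    return q * 10 > TRUE_ORANGE_VALUE

def elicit(lo=0.0, hi=1.0, rounds=30):
    """Bisect on the winning chance q until indifference is found."""
    for _ in range(rounds):
        mid = (lo + hi) / 2
        if prefers_lottery(mid):
            hi = mid          # lottery still preferred: lower the odds
        else:
            lo = mid          # orange preferred: raise the odds
    return (lo + hi) / 2      # the switch point

print(round(elicit() * 10, 2))  # utility of the orange: 7.0
```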

The advantage of this second procedure is that it can be applied to anything, since a

hypothetical lottery can be constructed for any outcome. This provides good reason to think that

desires can always be represented as having a certain intensity level. The other advantage to this

lottery procedure is that it is formally analogous to the procedure used to fix beliefs. A standard

strategy for determining people's level of conviction is to see how much they are willing to bet


upon the outcome of events. For instance, if someone is quite convinced that p, and he is offered

a choice between $10 for sure, or $20 if p is true, then he should prefer the conditional offer. But

if the value of the sure-thing offer is slowly increased, eventually there will come a point at

which he becomes indifferent between that offer and the gamble. This will reveal the probability

that he assigns to p. (For example, if he is "90 per cent sure" that p, then he will prefer the sure-

thing offer once it gets higher than $18*)

There are three things to keep in mind about this definition of utility:

• It is referred to as expected utility because it represents only the expected value of a

particular choice, i.e. the value ex ante, or before it is known how things turn out. Once

the action is performed, the agent will discover which state actually obtains, and so will

discover what actual utility she receives. Thus the expected utility of an action is

distinguished from its payoff, which is the value that it has for the agent ex post. In the

example above, action a3 may have an expected utility of 5.8, but it will have a payoff of

either 10, 4 or 0, depending upon which state obtains.

• It is important not to think of utility as some kind of distinct psychological state that

agents seek to maximize. The "utility" that an agent derives from an action is just a

numerical short-hand for expressing the way that the outcome satisfies one or more of

her desires. There is no reason to think that these desires have anything in common. The

agent's utility-maximizing course of action may be to donate money to famine relief, but

this does not mean that she donates to famine relief in order to maximize her utility. She

donates to famine relief because she wants to alleviate the suffering of others, and

donating to famine relief is the best way of doing this. The fact that it is the "best" way of

doing this is what is indicated by the fact that it is utility-maximizing. Nevertheless, our

way of speaking can easily lead to thinking that there is some particular thing -- called

utility, pleasure or happiness -- that all actions are intended to achieve. This is referred to

* This assumes that the agent is risk-neutral. One of the advantages of using lotteries to ascertain the intensities of the

person's desires is that it builds the agent's attitude toward risk right into the utility function, allowing the theorist to

treat all agents as if they were risk-neutral in subsequent analysis.


as the fallacy of misplaced concreteness – just because a word is used referentially, it

does not follow that there is something out in the world to which it corresponds.

• It is very important to remember that the numbers used to represent the agent's utility are

arbitrary. For any given utility function for an agent, one can construct a notational

variation on it by multiplying it by any positive number and/or adding any

number to it. More specifically, any positive linear transformation (U'(a1) = xU(a1) + y: x

> 0) of an agent's utility function yields an equivalent representation of that agent's

desires. This follows from the fact that the numbers assigned to the extremes (the best and

worst outcome) are arbitrary. In the above example, instead of assigning 10 to apple and 0

to nothing, we could have assigned 50 to apple and 10 to nothing, and it would have

made no difference in the calculation of which action the agent should perform. The

utility function is constructed by translating the agent's desire for an orange or for a

banana into a desire for a specific fraction of an apple. Once all desires are expressed in

terms of apple-fractions, then it is possible to balance them against one another. However,

instead of performing calculations like (.3apple + .12apple), it is easier to just assign

"apple" some numerical value, and forget about the specific content of the desire that is

being used as numeraire. The danger in this strategy is that it can mislead one into

thinking that, once this number has been assigned, the intensity of the agent's

desire for an apple has also been determined. This is not the case. Two agents could have

exactly the same utility function for fruit, yet could hold these desires with vastly different

intensities. I may like apples more than oranges and bananas, but not like fruit in general,

while my friend is wild about it. We may therefore each get utility of 7 from oranges, but

this says nothing about how much happiness either of us gets from oranges in the grand

scheme of things. As a result, this type of utility measure does not yield meaningful

interpersonal comparisons. One person can have a longer life-expectancy, or a longer

life, than someone else, but no one can have more expected utility, or a larger utility

payoff, than anyone else. It is therefore very important to resist the idea that someone

with a greater numerical utility payoff is in any way "happier" than anyone else. It may be


possible to make these sorts of comparisons somehow, but the notion of utility, as defined

in decision theory, cannot be used to perform them.
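The invariance claim above can be checked directly: rescaling the utilities of Figure 2.3 by the transformation mentioned in the text (10 becomes 50, 0 becomes 10, i.e. x = 4, y = 10) leaves the agent's choice unchanged. A minimal sketch (the variable names are mine):

```python
# Positive linear transformations of a utility function leave the agent's
# choice unchanged. Utilities and probabilities are those of Figure 2.3.
p = {"s1": 0.2, "s2": 0.3, "s3": 0.5}
u = {
    "a1": {"s1": 10, "s2": 4, "s3": 0},
    "a2": {"s1": 0, "s2": 10, "s3": 4},
    "a3": {"s1": 4, "s2": 0, "s3": 10},
}

def best_action(utility):
    return max(utility, key=lambda a: sum(p[s] * utility[a][s] for s in p))

# Rescale: x = 4, y = 10 maps the 0-10 scale onto the 10-50 scale,
# as in the text's example of assigning 50 to apple and 10 to nothing.
x, y = 4, 10
u_prime = {a: {s: x * u[a][s] + y for s in p} for a in u}

print(best_action(u))        # a3
print(best_action(u_prime))  # still a3
```

Since the probabilities sum to 1, the expected utility of each action is simply rescaled by the same x and y, so the ranking of actions cannot change.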

2.2 Equilibrium

Decision theory provides an extremely simple account of the way that we make choices,

based on a set of quite minimal psychological assumptions. It does, however, have some

extremely surprising consequences. Most of these arise when it is applied to social interaction.

Social interaction can be handled, at least initially, as a straightforward generalization of

the decision-theoretic model. Suppose that now, instead of wanting to go to one of the local cafés

for a cup of coffee, I want to go because I am hoping to bump into an acquaintance of mine. My

acquaintance has the same desire to bump into me. In this case, the action and the outcome can

be defined in the same way, but the state is a little different. In order to succeed in meeting my

friend at café x, we both must decide to go there. This means that my friend's action is one

component of the state for me, just as my action is one component of the state for her. The

interaction can be modeled as above, but with the two agents now "playing" against each other,

rather than against nature. This kind of interaction is referred to generically as a game. A typical

game is shown in Figure 2.4. Here player 1's actions are labeled {X,Y}, and player 2's {x,y}. The

dashed line between player 2's nodes shows, as before, that she does not know which node she is

at when called upon to move. The information set is used to model a situation in which both

players move simultaneously, or are not able to observe each others' actions. In this case, it

shows that we must both pick a café, not knowing which one the other has picked.


[Figure: player 1 chooses X or Y; player 2, at an information set joining her two nodes, chooses x or y, yielding one of the outcomes o1, o2, o3, o4.]

Figure 2.4 Game tree diagram

This kind of game tree diagram can be used to model interactions like the café-

coordination problem. Suppose that meeting each other has a utility of 1 for each of us, while

failing to meet is worth nothing. We each have two options "go to café x," and "go to café y."

The interaction can then be modeled as in Figure 2.5. The payoffs are shown, as before, between

parentheses, but now there are two numbers – the first gives player 1’s payoff for that outcome,

the second gives player 2’s payoff.


[Figure: the game tree of Figure 2.4 with payoffs (1,1) for (X,x) and (Y,y), and (0,0) for (X,y) and (Y,x).]

Figure 2.5 Coordination game

In order to solve this problem, both players must, as before, determine the probability that

a particular action will yield a particular outcome. But now, instead of trying to ascertain the

probability of a fixed state of nature, player 2 must try to anticipate what action player 1 will

perform. Specifically, she must determine whether player 1 will select X or Y (i.e. whether she

will be at the top or the bottom node of her information set) before she can decide whether to

pick x or y. It is clear from the diagram, however, that player 1 is in a perfectly symmetric

position. In order to decide whether to select X or Y, he must know whether player 2 will choose

x or y. Unfortunately, since neither of them is able to make up his or her mind what to do before

knowing what the other intends to do, the problem cannot be solved in any straightforward way.

And as everyone knows from experience, it is very easy in situations like this to goof up and miss

each other.

An observer looking at the café-coordination game could reasonably predict that each

outcome is equally probable. However, there is one thing to notice about the interaction. The

(X,x) outcome and the (Y,y) outcome have one advantage over the (X,y) and the (Y,x), viz. that

if both players know what the other one intends to do, it does not give either of them an incentive

to change their strategy. This makes these outcomes, in a sense, self-enforcing. The other

outcomes, in which the two players miss each other, provide each player with an incentive to

change. For instance, if I go to one café looking to bump into my acquaintance, but fail to find

her, I will probably not keep returning to that same café. Next time, I will try a new one. Of


course, this can lead to a comedy of errors, in which we both switch back and forth, and keep

missing each other. The point is simply that as long as we fail to coordinate, one of us at least has

an incentive to change his or her behaviour. But as soon as we hit upon a solution, i.e. once we

wind up in the same café, then neither of us has an incentive to change. Once we succeed in

meeting, that outcome will tend to "stick," so that we will return to that same café if we want to

bump into one another again.

In game theory, outcomes that are self-enforcing in this way are referred to as equilibria.

The most basic kind of equilibrium is referred to as Nash equilibrium – after the mathematician

John Nash – and is defined as follows (where a "best response" is the action that maximizes

expected utility):

Nash equilibrium: A set of actions is in equilibrium if each player's strategy is a best

response to that of the other players.

This definition simply captures the idea that the outcome is self-enforcing. If each player's

strategy maximizes expected utility, given the strategies of the others, then no one has an

incentive to change strategies.
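The definition can be applied mechanically to the café-coordination game of Figure 2.5. A minimal sketch (the payoff table is from the text; the function names are mine):

```python
# Checking which outcomes of the café-coordination game are Nash
# equilibria: a profile is an equilibrium if neither player can do
# better by unilaterally switching café.
payoff = {          # (player 1's café, player 2's café) -> (u1, u2)
    ("X", "x"): (1, 1), ("X", "y"): (0, 0),
    ("Y", "x"): (0, 0), ("Y", "y"): (1, 1),
}

def is_nash(s1, s2):
    u1, u2 = payoff[(s1, s2)]
    best_1 = all(u1 >= payoff[(alt, s2)][0] for alt in ("X", "Y"))
    best_2 = all(u2 >= payoff[(s1, alt)][1] for alt in ("x", "y"))
    return best_1 and best_2

for s1 in ("X", "Y"):
    for s2 in ("x", "y"):
        print((s1, s2), is_nash(s1, s2))
# Only (X, x) and (Y, y) pass: the two 'self-enforcing' outcomes.
```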

The idea of an equilibrium is very important in analyzing social interactions. Of course,

the fact that there is an equilibrium available to players does not mean that they will

automatically choose it, and so the identification of an equilibrium does not translate

automatically into a prediction of their behaviour (in the way that, for instance, the laws of

physics allow us to predict the behaviour of objects). People can always make mistakes, and

there is no guarantee that, even if fully rational, they will all hit upon the same equilibrium. (In

the café coordination game there are two equilibria, and no real basis for choosing one over the

other.) What equilibrium analysis reveals is rather the tendency that social states will have to

either change or remain stable.

This allows us to formulate the question about the Pareto-optimality of social states more

precisely. The question that began this chapter was whether or not individual utility-

maximization would lead to Pareto-optimal outcomes. More precisely put, the question is

whether the strategic equilibria of “games” (or social interactions) should be Pareto-optimal.


Looking at the definition of Nash equilibrium, it might appear that it should be so. After all, if

everyone's strategy is a best response to everyone else's, then shouldn't everyone be in a position

where their condition cannot be improved without worsening someone else's? The answer is no.

To see why, consider the game-tree diagram in Figure 2.6, along with the graph showing the

available payoffs.

[Figure 2.6 Prisoner's dilemma. A. Extensive form game: player 1 chooses C or D; player 2, whose two decision nodes lie in a single information set, chooses c or d. The payoffs are (2,2) for {C,c}, (0,3) for {C,d}, (3,0) for {D,c}, and (1,1) for {D,d}. B. Graph of payoffs: the four payoff pairs plotted as payoff to 1 against payoff to 2, with the Nash equilibrium marked at (1,1).]

This is an example of a game called a "prisoner's dilemma" (or PD), named after a

famous story used to illustrate it. The only equilibrium of the game is {D,d}, which provides a

payoff of (1,1). This outcome is strictly Pareto-inferior to (2,2), but {C,c} is not an equilibrium.

To see why, consider player 2's decision. If she is on the top node of her information set, she has

a choice between playing c, and getting a payoff of 2, or playing d, and getting a payoff of 3.

Clearly d is superior. If she is on the bottom node of her information set, then she has a choice

between playing c and getting 0, or playing d and getting 1. So again d is better. This means that

no matter what player 1 does, player 2 is better off playing d. Player 1 can see this, and so has a

choice between choosing C, and getting 0, or choosing D and getting 1. So he chooses D. (Of

course, even if he didn’t expect player 2 to play d, it would still be in his interest to play D – the


incentives in the game are symmetric.) Player 2 also chooses d, and they get the (1,1) outcome.

{C,c} will always be unstable, because C is not a best response to c (D is), and c is not a best

response to C (d is). Thus both players have an incentive to switch strategies away from {C,c},

but when they both do this, it gives them the Pareto-inferior payoff associated with {D,d}.
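This reasoning can be checked mechanically. The short Python sketch below (an illustration, using the payoffs of Figure 2.6) confirms that {D,d} is the only profile that passes the best-response test, and that it is nevertheless Pareto-dominated by {C,c}.

```python
# Payoffs from Figure 2.6, as (player 1, player 2) utilities.
pd = {("C", "c"): (2, 2), ("C", "d"): (0, 3),
      ("D", "c"): (3, 0), ("D", "d"): (1, 1)}

def is_equilibrium(r, c):
    # Neither player can gain by a unilateral deviation.
    u1, u2 = pd[(r, c)]
    return (all(pd[(r2, c)][0] <= u1 for r2 in "CD")
            and all(pd[(r, c2)][1] <= u2 for c2 in "cd"))

def pareto_dominated(r, c):
    # Some other profile makes both players strictly better off.
    u1, u2 = pd[(r, c)]
    return any(v1 > u1 and v2 > u2 for v1, v2 in pd.values())

print([p for p in pd if is_equilibrium(*p)])   # [('D', 'd')]: the only equilibrium
print(pareto_dominated("D", "d"))              # True: (2,2) beats (1,1) for both
```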

This may seem kind of abstract, and so it is usual to tell a story to illustrate how each

player's reasoning might go. In the standard story, two criminal suspects are arrested on a

somewhat minor charge (carrying a 2-year sentence). However, the police believe that the

suspects have also been involved in a more serious offence (carrying a 4-year sentence). They

have enough evidence to convict both on the minor charge, but they will not be able to get a

conviction on the major charge unless one of the suspects agrees to testify against the other. So

the police make them each the following offer: refuse to confess and get convicted on the minor

charge; testify against your accomplice and have the minor charge dismissed. This means that

each player is looking at the options shown in Figure 2.7. (This is an alternative way of showing a

game, referred to as the normal form. Player 1 chooses a row, Player 2 a column. The payoffs are

shown in the corresponding box, in the usual order – although here they show the amount of time spent in jail, i.e. disutility rather than utility.)

                        Player 2
                   Refuse      Testify

Player 1  Refuse   (2,2)       (6,0)

          Testify  (0,6)       (4,4)

Figure 2.7 Length of jail sentence

Consider the problem from Player 1's perspective. His partner is either going to testify or

refuse. If his partner is going to refuse, then he has a choice of either refusing as well, and


serving 2 years on the minor charge, or else testifying, and so getting off with no jail time at all.

So testifying is clearly better. However, if his partner is going to testify, then he has a choice of

either going to jail for 6 years on both the major and the minor charge, or else testifying in return

and cutting this down to 4 years by eliminating the minor charge. Either way, he is better off

testifying (this can be determined by inspection in Figure 2.7, where player 1's jail time is lower

in the "Testify" row than in the "Refuse" row in both columns). But since player 2 is in exactly

the same situation as player 1, both of them are better off testifying, and so both of them go to jail

for 4 years on the major charge. If they had kept their mouths shut, they would only have served

2 years, and so the outcome is – for them – Pareto-suboptimal. (It may be better for the rest of

society, but not necessarily so. It is in the interest of both players to testify that the other

committed the major crime, regardless of whether either of them actually committed it.)
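Because the numbers in Figure 2.7 are years of jail time, each player is minimizing rather than maximizing. A small Python sketch (illustrative, using those numbers) verifies the dominance argument just given: testifying yields player 1 strictly less jail time than refusing no matter what player 2 does.

```python
# Jail terms from Figure 2.7 (years served, so LOWER is better),
# as (player 1, player 2) for each (row, column) profile.
years = {("Refuse", "Refuse"): (2, 2), ("Refuse", "Testify"): (6, 0),
         ("Testify", "Refuse"): (0, 6), ("Testify", "Testify"): (4, 4)}

def dominates(a, b):
    """True if row action `a` gives player 1 strictly less jail time
    than `b` against every choice player 2 might make."""
    return all(years[(a, col)][0] < years[(b, col)][0]
               for col in ("Refuse", "Testify"))

print(dominates("Testify", "Refuse"))   # True: better in both columns
```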

If this example seems a bit fanciful, we can return to the story of Bill and Ted fishing on

the deserted island. Suppose each of them has the option of setting either one trap and then

poaching from the other, or setting three traps and not poaching. But setting up more than one

trap makes it impossible to supervise the traps, and so easy for the other to poach. If each trap

catches one fish, then the number of fish they each can expect to catch is shown in Figure 2.8

below:

                            Player 2
                     Don't Poach     Poach

Player 1  Don't Poach   (3,3)        (0,4)

          Poach         (4,0)        (1,1)

Figure 2.8 Poaching

This diagram presents in a more precise fashion the informal analysis presented at the

beginning of this chapter. If Ted is going to poach, then Bill is better off only setting up one trap


and poaching as well. But if Ted isn't going to poach, Bill is still better off setting up only one

trap. So both of them set up only one trap, and they each get one fish instead of three. This seems

like a completely irrational state of affairs – they have the technology, the time and the ability to

catch as many as 6 fish, but instead they only catch 2. The rest of their time is wasted, as they sit

around all day supervising their traps instead of catching more fish. But they cannot reach the

Pareto-optimum, because it is not an equilibrium. Even if they realize that they are in this

situation, it will not change their behaviour. If either one started setting up multiple traps one

day, it would just give the other an incentive to start poaching again, and so it would be just a matter of

time before the new arrangement unraveled. The equilibrium, however, despite the fact that it is

worse for both of them, has no similar tendency to unravel. Thus Bill and Ted are caught in

exactly the same situation as the suspects after whom the famous prisoner's dilemma was named.

2.3 The war of all against all

Many people have found these prisoner's dilemma situations to be paradoxical. It seems

that, in such cases, the maximizing (or instrumental) conception of practical rationality

recommends that agents engage in manifestly irrational behaviour. This has led some people to

think that there must be something wrong with this conception of rationality. However, it is not

clear in what sense the outcome that the two parties achieve in a PD is irrational. It would be less

question-begging to say that the two players in a prisoner's dilemma (or Bill and Ted in their

fishing struggle), are engaging in collectively self-defeating behaviour. The inability to achieve a

Pareto-optimum through strategic interaction can be referred to as a collective action problem.

The reason that we should not jump hastily to the conclusion that there is something

wrong with the theory of rationality that has these consequences is that people do in fact very

often find themselves caught in collective action problems. In fact, it is nearly impossible to drive

around in any large city for more than ten minutes without getting into one. Consider a few

classic examples:

● Noisy Neighbours: Anyone who has lived in an apartment building knows what it is like to

listen to a neighbour's music through the wall. Many people respond to the problem by putting on

some music of their own, in order to drown out the offensive noise coming from next door. This


can generate a typical prisoner's dilemma. Suppose Mary puts on some music at fairly high

volume. This irritates her neighbour Sue, who responds by putting on some music of her own.

Note that Sue would rather not listen to any music at all, but if she does have to listen to some,

she would rather listen to her own than Mary's. Similarly, Mary can now hear some of Sue's

music filtering through the wall, which diminishes her listening enjoyment, but she can hardly

turn off her music now, since listening to just Sue's music would be even more irritating. So she

leaves it on. Now both women are listening to music, even though they would both rather sit in

silence. The payoffs can be represented as follows:

                      Sue
              Silence      Music

Mary  Silence  (2,2)       (0,3)

      Music    (3,0)       (1,1)

Figure 2.9 Dueling music

Figure 2.9 models a one-shot prisoner's dilemma, since it represents only a single

interaction. However, we can easily see how a situation like this can escalate. If Sue responds to

Mary's music by attempting to play her own music louder, then Mary may respond by turning up

the volume on her own stereo. If Sue responds in kind, then Mary will have little choice but to

escalate things further. Eventually both may wind up listening to music at a volume far above the

comfort level. This sort of dynamic, in which an interaction generates increasingly suboptimal

outcomes, is known as a race to the bottom. An "arms race" is a typical example of an

interaction with this structure.
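This escalation dynamic can be sketched as a toy model. The volume scale, the step size, and the starting volumes below are all invented for illustration; the point is only that mutual best responses ratchet both players up to the maximum.

```python
def escalate(max_volume=10, step=1):
    """Toy best-response dynamic for the dueling-music example.

    Each neighbour responds to the other's music by playing `step` units
    louder, until the stereo (or the ear) hits `max_volume`. All numbers
    are illustrative assumptions, not taken from the text.
    """
    mary, sue = 1, 0          # Mary puts some music on; Sue starts in silence
    history = [(mary, sue)]
    while max(mary, sue) < max_volume:
        sue = min(mary + step, max_volume)    # Sue drowns Mary out...
        mary = min(sue + step, max_volume)    # ...and Mary turns hers up again
        history.append((mary, sue))
    return history

print(escalate()[-1])   # (10, 10): both wind up at maximum volume
```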


● Gridlock: Gridlock is a collective action problem that arises when streets are heavily

congested. Cars enter an intersection on a green light, but because of congestion ahead, are

unable to clear the intersection. As a result, when the light changes, cars travelling on the other

road are unable to get through, causing congestion that locks up the intersection behind them. If

this happens two more times, the intersections on all four corners of a block can get locked up.

This produces gridlock, shown schematically in Figure 2.10. The amazing thing about gridlock is

that it can last forever, since the four obstructions are mutually reinforcing. Once it starts, it also

spreads very easily to neighbouring blocks, at one point locking up miles and miles of roadway in

cities like New York and Los Angeles (which is why these cities introduced stiff fines for

motorists caught in an intersection when the light changes).

[Figure 2.10 Gridlock: schematic of a city block whose four surrounding intersections are all marked "gridlocked", each blocked by cars that entered on a green light but could not clear.]

Gridlock is clearly a suboptimal outcome. The reason it is so common is that the

interaction has the structure of a prisoner's dilemma. Each driver has a choice of either entering

the intersection as soon as possible, or else waiting until the intersection can be cleared before

entering. The danger is that if you don't enter the intersection right away, the cars going the other


way may lock you out. The interaction can be thought of as a game between North-South drivers

and East-West drivers. Assume that the lights cycle every minute. The choice facing drivers is

then as depicted in Figure 2.11. The story is roughly as follows: if the cars going the other way

are locking you out, then you are better off entering right away in order to cut down the number

of lights you have to wait by one. But if they aren't locking you out, then you are still better off

entering right away, because it still cuts down the number of lights you have to wait by one. In

other words, no matter what everyone else is doing, you are always better off entering rather than

waiting. So everyone will enter, and the result will be gridlock.

                          East-West
                     Wait        Enter

North-South  Wait    (1,1)       (3,0)

             Enter   (0,3)       (2,2)

Figure 2.11 Enter or wait?

The problem is that waiting to enter an intersection until one is certain that it can be

cleared is strictly a courtesy to other drivers; it generates no benefit for the person who does it.

To recall some of the terminology used earlier, waiting to enter generates a positive externality

-- a small benefit for other drivers. Entering right away, on the other hand, only slows down

other drivers, not you. The costs of the action therefore take the form of a negative externality.

The interaction has a suboptimal outcome because individuals can speed up their own travel only

by generating the negative externality. Furthermore, they receive the positive externality from

others regardless of whether or not they reciprocate. This means that no one has any incentive to


produce the positive externality, everyone has an incentive to produce the negative one, and so

everyone is worse off, because everyone is harmed by everyone else's actions.
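The externality accounting in this paragraph can be made concrete with a toy calculation based on the numbers in Figure 2.11 (lights waited, so lower is better). Whatever the other direction does, switching from waiting to entering saves the switching driver one light and costs the other direction two, so every such switch increases total waiting.

```python
# Lights waited, from Figure 2.11 (lower is better), as (North-South, East-West).
waits = {("Wait", "Wait"): (1, 1), ("Wait", "Enter"): (3, 0),
         ("Enter", "Wait"): (0, 3), ("Enter", "Enter"): (2, 2)}

def switch_to_enter_effect(other):
    """Effect of North-South switching from Wait to Enter, holding the
    East-West choice fixed: (change in own wait, change in other's wait)."""
    before = waits[("Wait", other)]
    after = waits[("Enter", other)]
    return after[0] - before[0], after[1] - before[1]

print(switch_to_enter_effect("Wait"))   # (-1, 2): saves you one light, costs them two
print(switch_to_enter_effect("Enter"))  # (-1, 2): same private gain, same externality
```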

● Merging: Traffic on an expressway usually slows to a crawl whenever there is a lane reduction

that forces cars to merge. However, the volume of traffic on either side of the merge may be the

same, and so the slowdown cannot be explained as a result of the reduced capacity of the road. It

is the merge itself that slows down the traffic. This is because the merge can create a collective

action problem. On an open road, an enormous amount of lane-changing can occur without

disrupting traffic flow at all. This is because drivers wait until there is a "space," which will

allow them to move into an adjacent lane without slowing anyone. The problem with lane

reductions is that drivers try to stay in the lane that is ending as long as possible, and so pass over

"spaces" that would allow them to switch lanes without disrupting traffic flow. But staying in the

lane until it ends reduces the chances that they will find a space, and leads to a slowing of traffic

in both lanes. This is the suboptimal outcome.

[Figure 2.12 Waiting to Merge: schematic of a lane reduction, contrasting a driver who merges now into an available space with one who stays in the ending lane to merge later.]

Again, it can be seen that this interaction has the structure of a PD. For a driver in the

lane that is ending, the interaction can be thought of as a game between her and the driver behind

her. She has two options, "merge now", and "merge later." The problem is that the decision to

merge later, when there may not be a good space, slows down every car behind her, but may not

actually slow her down. So she has no particular incentive to merge right away. Furthermore, if


she does choose to merge, it frees up her lane, so that the driver behind her may pass her, and

then merge ahead of her, thereby slowing her down. Thus merging when you have an opportunity

is like waiting to enter an intersection – it benefits only other drivers, and can easily harm you if

others take advantage of the opportunity it provides to them.

● Rubber-necking: Here is another traffic example, which I will simply cite from the economist

Thomas Schelling:

A strange phenomenon on Boston's Southeast Expressway is reported by a traffic

helicopter. If a freak accident, or a severe one, occurs in the southbound lane in the

morning, it slows the northbound rush hour traffic more than on the side where the

obstruction occurs. People slow down to enjoy a look at the wreckage on the other side of

the divider. Curiosity has the same effect as a bottleneck. Even the driver who, when he

arrives at the site, is ten minutes behind schedule is likely to feel that he's paid the price

of admission and, though the highway is at last clear in front of him, will not resume

speed until he's had a good look, too.

Eventually large numbers of commuters have to spend an extra ten minutes

driving for a ten-second look. (Ironically, the wreckage may have been cleared away, but

they spend their ten seconds looking for it, induced by the people ahead of them who

seemed to be looking at something.) What kind of a bargain is it? A few of them, offered

a speedy bypass, might have stayed in line out of curiosity; most of them, after years of

driving, know that when they get there what they're likely to see is worth about ten

seconds' driving time. When they get to the scene, the ten minutes' delay is a sunk cost;

their own sightseeing costs them only the ten seconds. It also costs ten seconds apiece to

the three score motorists crawling along behind them. Everybody pays his ten minutes

and gets his look. But he pays ten seconds for his own look and nine minutes, fifty

seconds for the curiosity of the drivers ahead of him.

It is a bad bargain.

More correctly, it is a bad result because there is no bargain. As a collective body,

the drivers might overwhelmingly vote to maintain speed, each foregoing a ten-second


look and each saving himself ten minutes on the freeway. Unorganized, they are at the

mercy of a decentralized accounting system according to which no driver suffers the

losses that he imposes on the people behind him (1978, 125).

Again the problem is that the costs associated with slowing down are all externalized, i.e.

passed on to drivers to the rear. Drivers keep generating these costs for each other, because once

they have reached the scene of the accident, they have no incentive not to. As a result, each driver

does worse than if everyone refrained from slowing down. It is a suboptimal outcome.
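Schelling's arithmetic is easy to reproduce. Assuming his figures (a ten-second look per driver, with roughly three score cars crawling through the bottleneck), a driver with fifty-nine lookers ahead pays his ten minutes, only ten seconds of which buys their own look.

```python
LOOK_TIME = 10          # seconds each driver spends looking (Schelling's figure)

def delay_for_driver(cars_ahead):
    """Total delay in seconds for a driver in the queue: ten seconds for
    their own look, plus ten seconds for every looking driver ahead."""
    return LOOK_TIME + LOOK_TIME * cars_ahead

print(delay_for_driver(59))              # 600 seconds: ten minutes in total
print(delay_for_driver(59) - LOOK_TIME)  # 590 seconds: nine minutes fifty seconds
                                         # paid for other drivers' curiosity
```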

● Double doors. Finally, an extremely minor example that illustrates how ubiquitous collective

action problems can be. Whenever people are trying to leave a crowded theatre, a bottleneck

inevitably develops at the exit doors. Often, however, despite the bottleneck, some of the doors

will remain shut and unused, even though they are not locked. This is usually because by the time

that anyone is close enough to the doors to try the closed ones, it is easier just to go out the open

ones. Opening a new door would generate a positive externality for the people behind, but doesn't

speed things up for the person exiting. As a result, the crowd is caught in a collective action

problem. This is why many theatres hire ushers to open all the doors. (Similarly, huge traffic

backups can be caused by a piece of debris on the road that could easily be moved aside.

Unfortunately, by the time anyone gets close enough to the debris to remove it, the road ahead of

them is clear [Schelling 1978, 125-6].)

These examples help to illustrate why the idea of a collective action problem is an

extremely powerful tool of social analysis. It shows how a particular structure of incentives can

lead people to get "stuck" in suboptimal outcomes, even though the overall behaviour pattern is

one that no one would recommend upon reflection. It also shows how everyone can be aware that

a particular interaction pattern is problematic, and yet not do anything to change it.

Of course, it is something of an exaggeration to say that people get "stuck" in suboptimal

outcomes. As Schelling's reference to voting illustrates, there are a number of devices that people

can use to restructure their interaction patterns in order to avoid these outcomes. The reason that

so many collective action problems show up in traffic flow is that when people are in their cars,


most of the devices that they would normally use to manage their interactions are unavailable. In

particular, the fact that people cannot talk to each other, and that the interactions are anonymous,

significantly limits the extent to which individuals can be held accountable for their behaviour. In

traffic, almost the only effective form of regulation is the fines and punishments associated with

traffic law enforcement.

What the example of traffic shows, however, is that without the countervailing force of

social regulation, social interaction has a natural tendency to generate collective action problems.

If motorists were left to their own devices, it would simply be impossible to drive anywhere.

Every intersection, for instance, is a potential collective action problem. Instead of waiting their

turn, motorists can always "follow-through" after the car in front of them without much danger.

Without some form of regulation – a traffic light, or a right-of-way convention – enforced

through either external sanctions, or internal motives of courtesy and consideration, busy

intersections would quickly become impassable from one direction. Anyone who doubts this

should consider the fact that the city of New York fines motorists $500 for getting caught in

certain intersections during a red light. This shows that people must be given very powerful

incentives to get them out of collective action problems, despite the fact that what they are being

"forced" to do makes them all better off in the end.

Hobbes was the first social theorist to notice how common collective action problems

would be in the absence of regulation. In thinking about social organization, Hobbes asks us to

imagine a situation in which, initially, all regulation is absent. In this condition, which he refers

to as the state of nature, individuals act in exactly the way that the maximizing conception of

rationality, outlined in the previous section, says that they do. He then goes on to consider a

variety of different activities – construction, agriculture, navigation, scientific research – and asks

whether they would occur in this state. His answer is a resounding no. All of these activities

would be undermined by collective action problems:

Whatsoever therefore is consequent to a time of war, where every man is enemy to every

man; the same is consequent to the time, wherein men live without other security, than

what their own strength, and their own invention shall furnish them withal. In such

condition, there is no place for industry; because the fruit thereof is uncertain: and


consequently no culture of the earth; no navigation, nor use of the commodities that may

be imported by sea; no commodious building; no instruments of moving, and removing

such things as require much force; no knowledge of the face of the earth; no account of

time; no arts; no letters; no society; and which is worst of all, continual fear, and danger

of violent death; and the life of man, solitary, poor, nasty, brutish, and short (1651, 89).

What Hobbes noticed is that all of the activities listed above require cooperation and

trust. Any interaction that involves cooperation or trust is not a strategic equilibrium, and so

cannot be sustained without some kind of regulatory apparatus. Since this is precisely what is

absent in the state of nature, none of these activities would occur. (Note that Hobbes is often

misunderstood on this point. Some people have taken him to be claiming that people are evil or

bad by nature, or that their preferences are antisocial. This is not the case. Hobbes does not claim

that people want bad things. He claims that the way people pursue their goals precludes the

possibility of stable cooperation.)

Take for instance the case of agriculture. The problem here is the same as the one with

Bill and Ted and their fish traps. People have a choice between planting their own crops, or

taking produce from others. Since it is impossible to supervise crops at all times, it is impossible

for the person who has invested the time and labour in the soil to stop people from taking his

produce. In general, it will always be easier to take other people's food than to cultivate one's

own. This is especially true if there is any danger that others will take what one has produced.

And as long as people prefer leisure to labour, this will be the case for any complex or long-term

economic activity. But this means that people simply will not engage in these activities in the

state of nature, or if they do, it will be at an inefficiently low level.

In discussing these cases, it is helpful to introduce some of the informal terminology that

has developed to describe the various strategies and outcomes. With Bill and Ted, in the situation

described, setting up a fish trap amounts to producing a social good. It is a social good because,

once the trap is set, anyone can come along and take the fish out of it. Planting a field of corn in a

Hobbesian state of nature is very much the same thing. Even though the farmer can try to protect

it by day, the moment she falls asleep it becomes a pure social good, as people can come along

and take as much as they want. Someone who consumes a social good, but does not produce any


of it (or produces less than he consumes), is referred to as a free-rider (e.g. by poaching Ted's

traps, Bill is getting a free ride off of Ted's labour). The opposite of a free rider is a sucker. This

is someone who produces a social good, but is then unable to consume it because someone else is

free riding. In such cases, the free-rider is said to exploit the sucker. In a collective action

problem, both players want to free ride, and neither wants to be a sucker. As a result, neither of

them produces the social good, and they both wind up worse off than they would be if they had

both chosen to produce it. For example, as Bill and Ted cut back on the number of traps they set,

in order to avoid being suckered, they also cut back the amount of fish they can catch. This

makes them poorer and poorer. But rather than making them reconsider their actions, all it does

is make them more and more determined to pursue this ultimately self-destructive pattern of

activity.

                                Ted
                    Don't Poach               Poach

Bill  Don't Poach   (3,3)                     (0,4)
                    social good produced      Bill: sucker
                    by both                   Ted: free-rider

      Poach         (4,0)                     (1,1)
                    Bill: free-rider          both free-ride:
                    Ted: sucker               race to the bottom

Figure 2.13 Collectively self-defeating behaviour

A race to the bottom seems so stupid that many people have a hard time believing that

there is any serious likelihood that one will occur. This can lead to an extremely dangerous

complacency. Suboptimal equilibria are extremely common in society, just as they are in nature

generally. One of the things that Hobbes realized is that even if individuals are not actively trying

to free ride off one another, they may have to adopt non-cooperative strategies just to avoid being

exploited. For example, consider the case of standing up in order to get a better view at a sporting


event. This action has a free-rider structure. If the person in front of you remains seated, then

standing up gives you a better view. But standing up completely obscures the view of the people

behind you, and so they have to stand up just to stay even. If too many people stand up, then the

rest do not have much choice but to do so as well – otherwise they won't be able to see anything.

Soon a new equilibrium is reached in which everyone is standing up, even though none of them

now have a better view than they would if everyone sat back down again. What this shows is that

even if you do not actively try to free-ride off your neighbours, you may have to act like those

who do in order to avoid being suckered by them. The resulting outcome will be suboptimal, and

everyone will be able to point the finger of blame at someone else.

Hobbes articulated this idea in his discussion of preemptive strikes (or "invading for

safety" [1651, 88]). He observed that people may defect, not because they hope to exploit others,

but simply to avoid getting suckered. This is what motivates his striking conclusion that the state

of nature would be a "war of all against all." Unregulated social interaction can actually make

people worse off than they would be if they were all alone. For instance, both Bill and Ted are

better off when they are alone on their own islands. As soon as they get together on a single

island, it sets off a race to the bottom that makes their condition much worse than it was before.

The lesson to be learned is simple: efficient individual action has no tendency whatsoever to

produce Pareto-efficient social outcomes. As Schelling put it, "Things don't work out optimally

for a simple reason: there is no reason why they should. There is no mechanism that attunes

individual responses to some collective accomplishment" (1978, 32).

2.4 Further Examples

Some people find this diagnosis of our "natural" tendencies to be overly pessimistic and

narrow. Surely, they argue, people are not so short-sighted and anti-social. How realistic is it to

think that people will get stuck in these collective action problems?

There is something to be said for this objection. It is no doubt true that most of the people

that we meet on a day-to-day basis are fairly cooperative. However, the people that we meet are,

for the most part, fully socialized adults who have already internalized a set of moral constraints

that limit their tendency to engage in straightforwardly utility-maximizing behaviour. They are

also acting in the context of social institutions in which they are punished, both formally and


informally, for engaging in uncooperative and anti-social behaviour. Anyone who has dealt with

children knows that, while they have spontaneously altruistic sentiments, they are not in general

disposed to cooperate and share with one another, nor do they have a natural desire to obey the

rules that their parents set out for them. This is something that we spend years teaching them, and

so it is, in a sense, "artificial." The way that these internal controls are built in through

socialization processes will be the subject of the next chapter. For the moment, I would like to

give a few more examples of collective action problems, in order to show how easy they are to

fall into in the absence of such controls.

● The Peacock's tail. It is worth noting that suboptimal equilibria are not something that is

unique to human social interaction. Biological evolution also generates equilibria. Species "stop

evolving" when the benefits that flow from any mutation – in the form of increased reproductive

fitness – are outweighed by the costs. Usually this is because the species has discovered an

"ecological niche" – a habitat or role that they are especially good at exploiting – that any

significant change would take them out of. This can make the costs associated with mutation

prohibitively high over long periods of time, and therefore place the species' form in evolutionary

equilibrium. It was initially thought that competition would lead to the "survival of the fittest," so

that each species would be optimally equipped for survival in its own environment. Despite the

power that this idea exercised over the imagination of scientists and laypeople alike, it is

obviously false. Evolutionary selection often generates a "race to the bottom." The classic

example is the peacock's tail. Male peacocks are literally handicapped by the size of their tails.

Why would such a feature arise? Through a process known as sexual selection – because the

larger the male peacock's tail, the more likely it is that female peacocks will mate with him. This

adaptation in the female may have originally served some useful purpose, but it led to the

perverse consequence that a male peacock could gain increased reproductive fitness by having –

not simply a large tail – but a tail larger than any other male's. This in turn "encouraged"

mutations that increased the size of the tail. The advantage conferred upon the large-tailed male

in increased access to females outweighed the disadvantages stemming from the increased

encumbrance. The problem is that the advantages obtained were all against members of his own

species.


Consider the diagram in Figure 2.14, which illustrates the hypothetical outcome of a

competition between two male peacocks for two females. The payoffs are given as the number of

offspring who survive to maturity. If both males have the same size tails, then they will each

mate with one female. However, if one has a larger tail than the other, he will mate with both

females. As a result, even though the offspring of the "big tailed" peacocks are less fit, in the

sense that fewer survive to maturity, a mutation that increases the size of one peacock's tail will

rapidly spread to the entire population. This will happen again and again and again, until the tail

is so large that the benefits of beating other males in the population are finally outweighed by

the overall liability that the tail confers, in terms of a reduction of average fitness. In this context,

having a small tail generates a "social good." It is better for average fitness, but leaves the

individual open to being "suckered."

                          Peacock 2
                   Small Tail    Big Tail
 Peacock 1
   Small Tail      (10, 10)      (0, 12)
   Big Tail        (12, 0)       (6, 6)

Figure 2.14 Number of offspring who survive to maturity
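The incentive structure in Figure 2.14 can be checked directly. The sketch below takes the strategy labels and payoffs (offspring surviving to maturity) from the figure; the brute-force equilibrium search itself is purely illustrative:

```python
# payoffs[(p1_strategy, p2_strategy)] = (p1_payoff, p2_payoff),
# taken from Figure 2.14
payoffs = {
    ("small", "small"): (10, 10),
    ("small", "big"):   (0, 12),
    ("big",   "small"): (12, 0),
    ("big",   "big"):   (6, 6),
}
strategies = ["small", "big"]

def is_nash(s1, s2):
    """A profile is a Nash equilibrium if neither peacock gains by deviating."""
    u1, u2 = payoffs[(s1, s2)]
    best1 = max(payoffs[(d, s2)][0] for d in strategies)
    best2 = max(payoffs[(s1, d)][1] for d in strategies)
    return u1 >= best1 and u2 >= best2

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print(equilibria)  # [('big', 'big')] -- the only equilibrium, even though
                   # (small, small) gives both peacocks more surviving offspring
```

Growing a big tail is the better reply to either strategy, so the mutually worse (big, big) outcome is the unique equilibrium.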

The obvious conclusion is that nature is not an optimizing system. Biologists refer to

sequences of escalating adaptations as evolutionary arms races. Evidence of these races is all

around. For instance, in the competition among plants for sunlight, it is to the advantage of any one

plant to be slightly taller than all of the rest. So like the peacock's tail, plants get taller and taller,

until the disadvantages that come from being tall eventually outweigh the advantages that come

from being a bit taller than surrounding plants. There is no intrinsic value in being a tall plant,

and there are considerable disadvantages. Growing taller is therefore a beggar-thy-neighbour

evolutionary strategy. When these arms races occur between species, there is a sense in which the

"fitness" of each species against its environment is always increasing. But when there is an

intraspecies arms race, as in the case of the peacock's tail, or of a plant that competes against

other members of the same species for sunlight, it is possible for this competition to reduce the

fitness of the species against its environment. And it is possible for a species to make itself so

vulnerable through escalating adaptations of this type that it effectively drives itself into

extinction.

● Genetic Engineering: One of the reasons that people worry about genetic engineering is that it

creates the possibility that we might unintentionally create our own version of a biological "arms

race." One of the blessings of biological evolution is that it is extremely slow – not so with

cultural evolution and social change. Suppose it suddenly became possible to choose a variety of

physical characteristics of our children. In some cases this would not be a problem, as the

strategic equilibrium would remain optimal. But in some cases it clearly would not. For instance,

suppose that parents perceived there to be some advantage in having children who were slightly

taller than average, or slightly thinner than average (Schelling 1978, p. 204). This could easily

lead to an increase in the average height of the population, or a decrease in average body mass.

No one would get their actual wish, which was to have a child who was taller or thinner than

everyone else, but the basic form of the species would change as an unanticipated consequence

of their actions. This outcome could easily be suboptimal, in the sense that nobody would

actually want the average height or body mass of the species to change.
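Schelling's point can be put in a toy model. Under the hypothetical assumption that every parent engineers a child a fixed margin taller than the current average, relative positions never change, but the average drifts each generation:

```python
# Hypothetical assumption: every parent aims a fixed margin above the
# current average height, so the new average is exactly that much higher
# -- and no child ends up taller than the others after all.

def run_generations(initial_avg, margin, generations):
    avg = initial_avg
    for _ in range(generations):
        avg += margin  # everyone's "taller than average" choices cancel out
    return avg

print(run_generations(initial_avg=170.0, margin=2.0, generations=5))  # 180.0
```

Five generations of aiming 2 cm above average add 10 cm to the species while leaving every child exactly as tall, relatively, as before.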

Some of this dynamic can already be seen with the popularity of plastic surgery in places

like California. Beauty is competitive, in the sense that beautiful people gain advantages over

others not by being good-looking in some absolute sense, but merely from being better-looking.

Plastic surgery as a strategy for upward social mobility is therefore a beggar-thy-neighbour

strategy. Augmenting one's own physical characteristics has the effect of bumping everyone else

down, making people with average physical characteristics into people with below-average

characteristics. This means that those who are being bumped down start having to get plastic

surgery just to retain position. But clearly a society in which everyone has plastic surgery is much

worse than a society in which no one does, since everyone winds up looking just as "old" or just

as "ugly" as everyone else in their reference class; the only difference is that everyone is now paying a lot

more to look that way. (Wearing makeup is of course a somewhat milder form of the same

problem.) The reason that social criticism of plastic surgery or cosmetics has no effect is simple:

it does not change the incentive structure of the underlying collective action problem.

● Actual Arms Races: The reference to biological "arms races" drew an analogy to a

comparable social phenomenon. The reason that weapons accumulation tends to escalate is that

the goal is not to be armed per se, but rather to be more heavily armed than one's opponent. This

can create a situation in which opponents, in trying to secure advantage over one another, wind

up just increasing the deadliness of their conflict. This is illustrated by the following classic

movie scene – two guys are about to start punching each other, when one suddenly pulls out a

knife and says "a-ha." The other one then pulls out a knife as well. The first then says, "okay,

forget about that," and puts the knife back. The joke is that we all know the first guy pulled the

knife because he wanted to transform the fist vs. fist fight into a fist vs. knife fight, not a knife vs.

knife fight. The knife vs. knife fight is suboptimal, in the sense that neither side has increased its

chances of winning, but both are more likely to sustain serious injuries. But because it is the only

equilibrium, it is wishful thinking on the part of the first guy to think that he can go back to a fist

vs. fist fight.

In order to understand the dynamic of an arms race, all one has to do is imagine a few

more sequences being tacked on to the knife-pulling episode, e.g. after the second guy pulls a

knife, the first pulls a gun. The second also pulls a gun, after which the first pulls out a bigger

gun, and so on. One can think of a state in which no one is armed as the cooperative outcome.

The first person to escalate is the free-rider. He takes advantage of the fact that no one else is

armed in order to gain an advantage over them. This puts everyone else in a position in which

they are vulnerable to exploitation from the free-rider, and so they respond by adopting the same

level of armament. But this just encourages someone else to free-ride by escalating further, and

for everyone else to respond as well. (Naturally, both sides accuse the other of being the free-

rider, i.e. the "aggressor." Thus in all military arms races, both parties claim to be developing

only defensive capability. But the difference is irrelevant, since both parties are reasoning in

exactly the same way.)
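The escalation sequence just described can be sketched as a simple loop: whichever side is not ahead escalates to the next level, since being more heavily armed than the opponent is always advantageous. The ladder of armament levels here is hypothetical:

```python
# Each step up the ladder is individually rational, yet every symmetric
# state reached is worse than the unarmed one.
levels = ["unarmed", "knife", "gun", "bigger gun"]

def escalate(max_level):
    a = b = 0                       # both unarmed: the cooperative outcome
    history = [(levels[a], levels[b])]
    while a < max_level or b < max_level:
        if a <= b and a < max_level:
            a += 1                  # free-ride: grab an edge by escalating first
        else:
            b += 1                  # match the escalation to avoid exploitation
        history.append((levels[a], levels[b]))
    return history

for state in escalate(len(levels) - 1):
    print(state)
# ends at ('bigger gun', 'bigger gun'): symmetric again, but far deadlier
```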

The same kind of incentive structure can be seen in the debates over gun control. Clearly,

a society in which no one has a gun is safer than one in which everyone does. Owning a gun

generates a negative externality for everyone around you, not only because you may attack them,

but also because there may be an accidental discharge, or someone may steal it, and so on.

However, having a gun can increase one's personal security, and so it is generally advantageous

to own one. This is true regardless of how many other people own guns as well. The incentive

structure is shown in Figure 2.15. The chances of being shot increase with the number of guns in

the population. However, the chances of being shot or assaulted are lower if you yourself have a

gun. This means that if everyone wants to reduce their risk of getting shot, then the only

equilibrium is one in which everyone has a gun. Thus in trying to reduce their chances of getting

shot, everyone winds up increasing it – a classic instance of collectively self-defeating behaviour.

Figure 2.15 Gun ownership. Chances of getting shot rise with the percentage of the population who own guns; at every level of ownership, the curve for those who own guns lies below the curve for those who don't.
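One hypothetical parametrization of the curves in Figure 2.15 makes the logic concrete. The specific numbers below are illustrative assumptions, not from the text; what matters is only that risk rises with the fraction armed and is lower at every point for gun owners:

```python
def risk(fraction_armed, owns_gun):
    """Hypothetical chance of getting shot."""
    base = 0.01 + 0.2 * fraction_armed        # more guns around -> more risk
    return base * (0.5 if owns_gun else 1.0)  # owning a gun halves your own risk

# Whatever others do, owning a gun lowers your personal risk...
for f in (0.0, 0.5, 1.0):
    assert risk(f, owns_gun=True) < risk(f, owns_gun=False)

# ...so everyone arms. Yet the all-armed equilibrium leaves each person
# worse off than the state in which no one is armed:
print(risk(1.0, True) > risk(0.0, False))  # True
```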

● The Tragedy of the Commons: In making his argument for private property, Locke claims

that one acre of "enclosed" land is more productive than over a hundred lying in common. What

he is referring to is the fact that collective use of resources can generate suboptimal outcomes.

Collective agriculture, for instance, generates a problem of "shirking." This is just a kind of free

riding. For example, if ten people are lifting a heavy object, it is impossible to tell whether

everyone is really lifting as hard as they can. Each person's effort makes such a small contribution to the

overall lifting effort that they can easily "slack off" without anyone noticing. All that a reduction

in individual effort means is that a small amount of weight is transferred onto the shoulders of

each other person. Now consider the case of ten people who are assigned to look after a field of

corn. Again, the amount of effort that each individual puts in is fairly small, and so no one is

likely to notice if he or she "slacks off" a bit. But given that everyone is in the same position,

everyone will slack off a bit, and so the field will be less productive than it could be.
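The shirking incentive facing the ten corn-field workers can be sketched with hypothetical numbers: each unit of effort produces, say, 3 units of corn, shared equally among all ten, while the cost of effort is borne privately:

```python
N = 10  # workers sharing the field's output equally

def worker_payoff(own_effort, others_effort):
    total_output = 3.0 * (own_effort + (N - 1) * others_effort)
    return total_output / N - own_effort  # equal share minus private effort cost

# Only 3/10 of your own effort's product comes back to you, so shirking
# beats working no matter what the others do:
assert worker_payoff(0, 1) > worker_payoff(1, 1)
assert worker_payoff(0, 0) > worker_payoff(1, 0)

# Yet everyone working beats everyone shirking:
print(worker_payoff(1, 1), worker_payoff(0, 0))  # 2.0 0.0
```

The structure is the same free-rider problem as before: slacking is dominant for each, but universal slacking makes everyone worse off.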

A similar situation occurs with the consumption of resources. The classic example comes

from England, where land held in common was used to graze sheep. There was a certain

tendency for this land to be destroyed through overgrazing. The reason is that the amount of grass

nibbled by any particular farmer's sheep was fairly small, and so no particular farmer's decision

to graze his sheep on the commons was likely to result in the destruction of that land. This meant

that everyone had an incentive to graze their sheep on the commons regardless of what others did.

Their actions generated a significant benefit for themselves, along with a tiny negative externality

for the others (in the form of depletion of the common). As a result, more people grazed their

sheep on the commons than the land could sustain, and so it was eventually destroyed.
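A minimal model of the overgrazing dynamic: each farmer adds sheep so long as his private return is positive, ignoring the tiny externality each sheep imposes on all the others. The capacity and the return function below are illustrative assumptions, not from the text:

```python
CAPACITY = 100  # sheep the commons can sustain

def private_gain(total_sheep):
    # the value of one more sheep declines as the commons degrades,
    # but stays positive well past the sustainable level
    return 1.0 - total_sheep / (2 * CAPACITY)

total = 0
while private_gain(total) > 0:
    total += 1  # some farmer still finds it profitable to add a sheep

print(total, total > CAPACITY)  # 200 True -- double what the land can sustain
```

Because each farmer weighs only his own gain against his own (negligible) share of the damage, grazing stops long after the sustainable level has been passed.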

This exact same scenario was replayed in the Canadian east coast fishery, and was

responsible for the destruction of the cod stocks. Again, no particular fisher could catch enough

to deplete the stocks, and so everyone had an incentive to catch as much as he or she could.

The government set quotas, but everyone in the fishery had an incentive to pressure the

government to set their quotas as high as possible, or to disregard them whenever feasible. As a

result, the fishing industry destroyed its own resource base. (Of course, as in any collective action

problem, everybody blames someone else for the outcome. This has superficial plausibility,

because the harm that each individual suffers is in fact generated by someone else. But this

obscures the fact that, in collective action problems, everyone harms someone else, and is in turn

harmed by someone else. As a result, everyone is equally responsible for the outcome.)

What all of these examples show is that resources will be "overused" if people are given

unrestricted access to them. Using resources at a sustainable level is a social good, i.e. it provides

a positive externality for other members of the society, and possibly future generations. If

individuals are left free to consume as much as they can of the resource, they will not in general

produce this externality, but will instead deplete the resource. Thus every "conservation" problem

turns out, upon closer inspection, to be just one more type of collective action problem.

Key Words

action

belief-desire psychology

collective action problem

decision theory

equilibrium

ex ante

ex post

expected utility

externalities

free-rider

game theory

information set

instrumental rationality

Nash equilibrium

outcome

payoff

race to the bottom

state

state of nature

sucker