Aenorm 58


Preface

Behold...a new year is upon us

As I’m sitting here in a room at the University of Amsterdam on a very calm night, I just realized that another year has passed. A year which was quite turbulent for us econometricians, I might say. The “war on terrorism” policy of president Bush, with its gigantic expenses, eventually didn’t turn out to be the worst thing for the American economy during 2007. Even the numerous scandals around American companies assisting the American government in Iraq, like Blackwater, couldn’t change that. No, the most dreadful event for us econometricians in 2007 has to be the collapse of the American housing market. The dollar is still dropping, and exchanges all over the world are affected by this crisis in America. Do we have to fear another stock market crash, or are we just overreacting?

Aenorm also went through a turbulent year. One of the main concerns of the editorial staff of Aenorm during the last year was to broaden the group of readers and to make this group a bit more international. This resulted in an edition especially for the participants of the Econometric Game and a rise of the circulation of our magazine up to 1900 copies. The content of Aenorm has always been more or less in English, but last year we made the decision to no longer publish a single Dutch word in Aenorm, which makes it a lot more readable for our international subscribers all over the world.

A new board will bring the VSAE to new heights during the upcoming year, and it's time for me to pass the office of chief editor of Aenorm on to one of these board members with a lot of fresh and new ideas. I can only wish him all the best and sincerely hope he continues to improve this magazine.

A chief editor always has to realize that he would be nothing without his full editorial staff. David Hollanders states in his article in this edition that fortunately we observe cooperation in the real world, although economic theory doesn't always take it into account. I would therefore like to thank my staff for not being too economical with all the cooperation they gave me during the last year.

Erik Beckers


Cover design: Carmen Cebrián

Aenorm has a circulation of 2000 copies for all students of Actuarial Sciences and Econometrics & Operations Research at the University of Amsterdam and for all students in Econometrics at the Free University of Amsterdam. Aenorm is also distributed among all alumni of the VSAE.

Aenorm is a joint publication of VSAE and Kraket. A free subscription can be obtained at www.aenorm.nl.

Insertion of an article does not mean that it expresses the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine can be duplicated without permission of VSAE or Kraket. No rights can be derived from the content of this magazine.

© 2008 VSAE/Kraket

Quantifying Operational Risk in Insurance Companies 24

Most empirical evidence suggests that the Fisher effect, stating that inflation and nominal interest rates should cointegrate with a unit slope on inflation, does not hold, a finding at odds with many theoretical models. This paper argues that these results can be attributed in part to the low power of univariate tests, and that the use of panel data can generate more powerful tests. For this purpose, we propose two new powerful panel cointegration tests that can be applied under very general conditions.

Youbaraj Paudel

Limit Cycles and Multiple Attractors in Logit Dynamics 4

There is a growing body of experimental research documenting non-Nash equilibrium play, non-convergence and cyclical patterns in various learning algorithms. This article shows that, even for 'simple' three-strategy games, periodic attractors do occur under an alternative, rationalistic way of modelling evolution in games, namely the Logit Dynamics. Secondly, it presents numerical evidence for multiple interior equilibria in a Coordination game created via a sequence of saddle-node bifurcations.

Marius-Ionut Ochea

Effects of invalid and possibly weak instruments 31

This article examines IV (instrumental variables) estimation when instruments may be invalid. The limiting normal distribution of inconsistent IV is derived and provides a first-order asymptotic approximation to the density in finite samples. In a specific simple model this approximation will be examined and compared with the simulated empirical distribution with regard to measures for model fit, simultaneity, instrument invalidity and instrument weakness.

Jan Kiviet and Jerzy Niemczyk

Dice Games and Stochastic Dynamic Programming 37

This article considers stochastic optimization problems that are fun and instructive for teaching purposes on the one hand and involve challenging research questions on the other hand. These optimization problems arise from the dice game Pig and the related dice game Hog. The article concentrates on the practical question of how to compute an optimal control rule for various situations. This will be done using the technique of stochastic dynamic programming.

Henk Tijms

A Probabilistic Analysis of a Fortune-wheel Game 15

In this article a probabilistic analysis is given for a simple fortune-wheel game. Each player tries to get a higher score than any of his opponents. To achieve this goal each player must decide to turn the wheel a second time or not. Turning the wheel a second time has its price, because it can result in a zero score. In the article the optimal strategies are calculated for the case of two and three players. Simulations are added for the case of more than three players.

Rein Nobel and Suzanne van der Ster

The secondary insurance market 9

In the case of surrender, a policyholder can only sell the right of ownership of his policy to the insurer which has written the policy; insurers have a so-called buyer's monopoly. In the past decade, secondary insurance markets have been arising in foreign countries. This article will first give a historical overview of the origin of the market and afterwards will go into the current situation. Subsequently it will set out the advantages and disadvantages of the secondary insurance market. Finally the article considers the possibilities of such a market arising in the Netherlands.

Martijn Visser


Aenorm 58 Contents List


Volume 15, Edition 57, October 2007, ISSN 1568-2188

Chief editor: Erik Beckers

Editorial Board: Daniëlla Brals, Siemen van der Werff

Design: Carmen Cebrián

Lay-out: Jeroen Buitendijk

Editorial staff: Raymon Badloe, Erik Beckers, Daniëlla Brals, Jeroen Buitendijk, Nynke de Groot, Marieke Klein, Hylke Spoelstra, Siemen van der Werff

Advertisers: Aegon, AON, Deloitte, Delta Lloyd, De Nederlandsche Bank, Ibis, IMC, Nationale Nederlanden, PricewaterhouseCoopers, Mercer, ORTEC, PGGM, SNS Reaal, TNO, Towers Perrin, Watson Wyatt Worldwide

Puzzle 67

Facultative 68

Introduction of the no-claim protector: Research into the consequences for the Dutch car-insurance market 52

At the start of 2007, a new product was introduced in the insurance market: the no-claim protector. According to the insurers, the no-claim protector would lead to significant cost savings for the policyholders. This article summarises a research into the efficiency of a bonus-malus system including the no-claim protector. The goal of this research is to describe for which drivers the no-claim protector is a profitable investment.

Hein Harlaar

Conflicting interests in supply chain optimization 56

An elementary aspect of production planning is the relation between setup costs for starting a production run and the costs for holding inventory. One can link two of these problems in a supply chain where one producer supplies an intermediate product for the next producer. In this case we can still handle this multi-level production problem if we are allowed to see it as one optimization problem. But what if the supplier and customer have different goals in mind?

Reinder Lok

Coalitional and Strategic Bargaining: the Relevance of Structure 60

This article aims to be a first initiative and obviously must leave many questions unanswered. It focuses on the relevance of the degree of bargaining structure. In particular, the goal is to know whether a lower degree of structure increases the predictive power of coalitional solution concepts and whether it has an influence on the relative bargaining power of the players. Concretely the article looks at two bargaining situations with the same bargaining problem.

Adriaan de Groot Ruiz

A Bayesian Approach to Medical Reasoning 43

Medical reasoning is a complex form of human problem-solving that requires the physician to take appropriate action in a world that is characterized by uncertainty. Dynamic Bayesian networks are put forward as a framework that allows the representation and solution of medical decision problems, and their use is exemplified by a case study in oncology. This paper makes the case for dynamic Bayesian networks as a formalism for the representation of medical knowledge and the execution of medical tasks such as diagnosis and prognosis.

Marcel van Gerven

Micro-foundations are useful, up to a point 48

Economic analysis rests on the assumption of methodological individualism. So, no matter what the macro-phenomena under study might be, economic analysis tries to explain them as the (equilibrium) outcome of the games rational individuals play. It is certainly true that cooperation is the direct result of individual choices, but whether cooperation can be understood as such is exactly the topic under discussion.

David Hollanders


Limit Cycles and Multiple Attractors in Logit Dynamics

There is a growing body of experimental research documenting non-Nash equilibrium play, non-convergence and cyclical patterns in various learning algorithms (see Camerer (2003) for an overview). On the theory side, within the evolutionary game dynamics literature an important result is Zeeman's (1980) conjecture that there are no generic Hopf bifurcations in the case of three-strategy games: “When n = 3 all Hopf bifurcations are degenerate under Replicator Dynamics”. This means that no limit cycles are possible under the Replicator Dynamics for 3 x 3 games. Here we show that, even for such 'simple' three-strategy games, periodic attractors do occur under an alternative, rationalistic way of modelling evolution in games, namely the Logit Dynamics. Second, we present numerical evidence for multiple interior equilibria in a Coordination game, created via a sequence of saddle-node bifurcations.

Marius-Ionut Ochea

is a PhD student at the Center for Nonlinear Dynamics in Economics and Finance (CeNDEF), University of Amsterdam. For comments about this article the author can be contacted via e-mail: [email protected]

From Replicator to Logit Dynamics

Evolutionary game theory deals with games played within a large population over a long time horizon (evolution scale). Its main ingredients are the underlying normal form game - with payoff matrix A[n x n] - and the evolutionary dynamic class, which defines a dynamical system on the state of the population. In a symmetric framework, the strategic interaction takes the form of random matching, with each of the two players choosing from a finite set of available strategies E = {E1, E2, ..., En}. For every time t, x(t) denotes the n-dimensional vector of frequencies for each strategy/type Ei and belongs to the (n − 1)-dimensional simplex:

$$\Delta^{n-1} = \Big\{ x \in \mathbb{R}^{n}_{+} : \sum_{i=1}^{n} x_i = 1 \Big\}.$$

The assumption of random interactions proves crucial for the linearity of the strategy Ei payoff: this is simply determined by averaging the payoffs from each strategic interaction with weights given by the state of the population x. Denoting by f(x) the payoff vector, its components - the individual payoff or fitness of strategy i, in biological terms - are:

$$f_i(x) = (Ax)_i \qquad (1)$$

Sandholm (2006) rigorously defines an evolutionary dynamics as a map assigning to each population game a differential equation ẋ = V(x) on the simplex Δ^{n−1}. In order to derive such an ‘aggregate’ level vector field from individual choices he introduces a revision protocol ρ_{ij}(f(x), x) indicating, for each pair (i, j), the rate of switching ρ_{ij} from the currently played strategy i to strategy j. The mean vector field is obtained as:

$$\dot{x}_i = V_i(x) = \underbrace{\sum_{j=1}^{n} x_j\,\rho_{ji}(f(x),x)}_{\text{inflow into strategy } i} \; - \; \underbrace{x_i \sum_{j=1}^{n} \rho_{ij}(f(x),x)}_{\text{outflow from strategy } i} \qquad (2)$$

Based on the computational requirements and quality of the revision protocol, the set of evolutionary dynamics splits into two large classes: imitative dynamics and pairwise comparison dynamics. The first class is represented by the Replicator Dynamics (Taylor and Jonker (1978)), which can be easily derived by substituting into (2) the pairwise proportional revision protocol ρ_{ij} = x_j [f_j(x) − f_i(x)]_+ (a player switches from strategy i to strategy j at a rate proportional to the probability of meeting a j-strategist (x_j) and to the excess payoff of the opponent, if positive):

$$\dot{x}_i = x_i \big[ f_i(x) - \bar{f}(x) \big] = x_i \big[ (Ax)_i - x \cdot Ax \big] \qquad (3)$$

where $\bar{f}(x) = x \cdot Ax$ is the average population payoff.

Econometrics

Page 6: Aenorm 58

6 AENORM 58 January 2008

Replicator Dynamics found applications in biological, genetic or chemical systems, those domains where organisms, genes or molecules evolve over time via selection. From the perspective of strategic interaction, the main criticism of the ‘biological’ game-theoretic models is targeted at the intensive use of pre-programmed, simple imitative play, with no role for optimization and innovation. Specifically, in the transition from animal contests and biology to human interactions and economics, the Replicator Dynamics seems no longer adequate to model the rationalistic and ‘competent’ forms of behaviour. Best Response Dynamics would be more applicable to human interaction, as it assumes that agents are able to optimally compute and play a (myopic) ‘best response’ to the current state of the population. Apart from the highly unrealistic assumptions regarding agents’ capacity to compute a perfect best reply to a given population state, there is also the drawback that it defines a differential inclusion, i.e. a set-valued function. The best responses may not be unique, and multiple trajectory paths can emerge from the same initial conditions. A ‘smoothed’ approximation of the Best Reply Dynamics - the Logit Dynamics - was introduced by Fudenberg and Levine (1998); it was obtained by stochastically perturbing the payoff vector and deriving the Logit revision protocol:

$$\rho_{ij}(f(x), x) = \frac{\exp\big[\eta^{-1} f_j(x)\big]}{\sum_k \exp\big[\eta^{-1} f_k(x)\big]} = \frac{\exp\big[\eta^{-1} (Ax)_j\big]}{\sum_k \exp\big[\eta^{-1} (Ax)_k\big]} \qquad (4)$$

where η > 0 is the noise level. Here ρ_{ij} represents the probability of a player switching from strategy i to strategy j when provided with a revision opportunity. For high levels of noise the choice is fully random (no optimization), while for η close to zero the switching probability to the best reply is almost one. Plugging the Logit revision protocol (4) back into the general form of the mean field dynamic (2) and making the substitution β = η^{−1}, we obtain a well-behaved system of ODEs, the Logit Dynamics, as a function of the intensity of choice (Brock and Hommes (1997)) parameter β:

$$\dot{x}_i = \frac{\exp\big[\beta (Ax)_i\big]}{\sum_k \exp\big[\beta (Ax)_k\big]} - x_i \qquad (5)$$

For β → ∞ the probability of switching to the discrete ‘best response’ is close to one, while for a very low intensity of choice (β → 0) the switching rate is independent of the actual performance of the alternative strategies (equal probability mass is put on each of them). Mathematically it is a ‘smoothed’, well-behaved dynamics, while from the strategic interaction point of view it models a boundedly rational player.

Rock-Scissors-Paper

The first 3-strategy example we look at is a generalization of the classical game of cyclical competition, Rock-Scissors-Paper, as discussed in Hofbauer and Sigmund (2003), with δ, ε > 0. In general, any game can be transformed into a ‘simplest’ form by subtracting a constant from each column such that all the elements of the main diagonal are zeros (Zeeman (1980)):

$$A = \begin{pmatrix} 0 & \delta & -\varepsilon \\ -\varepsilon & 0 & \delta \\ \delta & -\varepsilon & 0 \end{pmatrix} \qquad (6)$$

The replicator equation (3) with the game matrix (6) induces on the 2-simplex the following vector field, writing (x, y, z) for the frequencies of the three strategies:

$$\begin{aligned}
\dot{x} &= x\,\big[(y\delta - z\varepsilon) - \big(x(y\delta - z\varepsilon) + y(z\delta - x\varepsilon) + z(x\delta - y\varepsilon)\big)\big] \\
\dot{y} &= y\,\big[(z\delta - x\varepsilon) - \big(x(y\delta - z\varepsilon) + y(z\delta - x\varepsilon) + z(x\delta - y\varepsilon)\big)\big] \\
\dot{z} &= z\,\big[(x\delta - y\varepsilon) - \big(x(y\delta - z\varepsilon) + y(z\delta - x\varepsilon) + z(x\delta - y\varepsilon)\big)\big]
\end{aligned} \qquad (7)$$

Proposition 1 For the Rock-Scissors-Paper game (6), the Hopf bifurcation of the interior steady state under the Replicator Dynamics (7), occurring at ε = δ, is degenerate.

Proof. It hinges on the computation of the first Lyapunov coefficient, which turns out to be l1 = 0; this means that there is a first degeneracy in the third-order terms of the Taylor expansion of the normal form. The detected bifurcation is a degenerate Hopf bifurcation (assuming away other higher-order degeneracies: technically, the second Lyapunov coefficient should not vanish). Although, in general, the orbital structure at a degenerate Hopf bifurcation may be extremely complicated, for our particular vector field induced by the Replicator a continuum of cycles is born at the critical parameter value.

The logit evolutionary dynamics (5) applied to our normal form game (6) leads to the following vector field:

$$\begin{aligned}
\dot{x} &= \frac{\exp[\beta(y\delta - z\varepsilon)]}{\exp[\beta(y\delta - z\varepsilon)] + \exp[\beta(z\delta - x\varepsilon)] + \exp[\beta(x\delta - y\varepsilon)]} - x \\
\dot{y} &= \frac{\exp[\beta(z\delta - x\varepsilon)]}{\exp[\beta(y\delta - z\varepsilon)] + \exp[\beta(z\delta - x\varepsilon)] + \exp[\beta(x\delta - y\varepsilon)]} - y \\
\dot{z} &= \frac{\exp[\beta(x\delta - y\varepsilon)]}{\exp[\beta(y\delta - z\varepsilon)] + \exp[\beta(z\delta - x\varepsilon)] + \exp[\beta(x\delta - y\varepsilon)]} - z
\end{aligned} \qquad (8)$$
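Both vector fields are straightforward to explore numerically. The sketch below (our own illustration, not from the article; the values δ = 1, ε = 0.8 and β = 30 are only indicative) integrates (7) and (8) with a classical Runge-Kutta scheme:

```python
import numpy as np

def rsp_matrix(delta, eps):
    # Normalized Rock-Scissors-Paper payoff matrix (6).
    return np.array([[0.0, delta, -eps],
                     [-eps, 0.0, delta],
                     [delta, -eps, 0.0]])

def replicator_field(x, A):
    # Replicator Dynamics (3)/(7): x_i * ((Ax)_i - x.Ax).
    f = A @ x
    return x * (f - x @ f)

def logit_field(x, A, beta):
    # Logit Dynamics (5)/(8): softmax(beta * Ax) - x.
    z = beta * (A @ x)
    e = np.exp(z - z.max())          # stabilized softmax
    return e / e.sum() - x

def rk4(field, x, h, steps):
    # Classical fourth-order Runge-Kutta integration.
    for _ in range(steps):
        k1 = field(x)
        k2 = field(x + 0.5 * h * k1)
        k3 = field(x + 0.5 * h * k2)
        k4 = field(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return x

A = rsp_matrix(1.0, 0.8)
x0 = np.array([0.5, 0.3, 0.2])
print(rk4(lambda x: replicator_field(x, A), x0, 0.01, 20000))
print(rk4(lambda x: logit_field(x, A, 30.0), x0, 0.01, 20000))
# For beta past the Hopf value, the logit trajectory should no longer
# converge to the barycentrum but keep circling on a small limit cycle.
```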


Figure 1. Replicator Dynamics and the RSP game for fixed δ = 1 and different ε: (a) unstable focus, (b) degenerate Hopf bifurcation for ε = 1, (d) stable focus

Proposition 2 For the Rock-Scissors-Paper game (6), the interior steady state of the Logit Dynamics (8) undergoes a generic, supercritical Hopf bifurcation at a critical value β_Hopf of the intensity of choice.

Proof. In order to show that the Hopf bifurcation is non-degenerate we have to compute again the first Lyapunov coefficient and check whether it is non-zero. The analytical form of this coefficient takes a complicated expression of exponential terms which, after some tedious computations, boils down to:

$$l_1(\beta_{Hopf}, \delta, \varepsilon) = -\frac{864\,\delta\varepsilon\,(\delta^2 + \delta\varepsilon + \varepsilon^2)}{3\sqrt{3}\,(\delta + \varepsilon)^3} < 0, \qquad \forall\, \delta > 0,\ \varepsilon > 0.$$

Typical trajectories of this route to cycling are shown in Figure 2 below. We notice that as β moves up from 10 to 35 (i.e. the noise level is decreasing) the interior stable steady state loses stability via a supercritical, non-catastrophic Hopf bifurcation, and a small, stable limit cycle emerges around the now unstable steady state. Unlike under the Replicator Dynamics, stable cyclic behaviour does occur under the Logit Dynamics, even for three-strategy games.

Figure 2. Logit Dynamics and the RSP game for fixed δ = 1, ε = 0.8 and free β: (a) stable focus, (b) generic Hopf bifurcation, (c) stable limit cycle

3 x 3 Coordination Game

Using topological arguments, Zeeman (1980) shows that stable games have at most one interior isolated fixed point under the Replicator Dynamics (Theorem 3). In particular, a fold catastrophe (two isolated fixed points which collide and disappear when some parameter is varied) cannot occur within the simplex. In this section we provide - by means of the classical coordination game - numerical evidence for the occurrence of multiple, isolated interior steady states under the Logit Dynamics, and we conjecture that a fold catastrophe is possible when we alter the intensity of choice β. The coordination game we consider is given by the following payoff matrix:


$$A = \begin{pmatrix} 1-\varepsilon & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1+\varepsilon \end{pmatrix}, \qquad \varepsilon \in (0, 1)$$

Proposition 3 Unlike the Replicator, the Logit Dynamics may display multiple, isolated interior steady states created via a fold bifurcation. In the particular case of a 3-strategy Coordination game, three interior stable steady states emerge through a sequence of two saddle-node bifurcations.

Figure 3 depicts the fold bifurcation scenario by which the multiple interior fixed points appear when the intensity of choice β (Panel (a)) or the payoff parameter ε (Panel (b)) changes. For small values of β the unique interior stable steady state is the barycentrum (1/3, 1/3, 1/3). A fold bifurcation occurs at β = 2.7 and two new fixed points appear, one stable and one unstable. If we increase β even further (β ≈ 3.5), a second fold bifurcation takes place and two additional equilibria emerge. A third saddle-node bifurcation occurs at β ≈ 3.5, completing the steady-state unfolding process. A similar pattern is visible in the payoff parameter space (Panel (b)), where a family of fold bifurcations is obtained for different values of the switching intensity β.

Figure 3. Equilibria curves as a function of the model parameters

The numerical computation of the basins of attraction of the different equilibria reveals desirable properties of the Logit Dynamics from a social welfare perspective. While for extreme values of the switching parameter the basins of attraction are similar in size to those under the Replicator Dynamics, for moderate levels of rationality the population manages to coordinate close to the Pareto-optimal Nash equilibria.
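The fold scenario of Proposition 3 can be probed with a crude numerical experiment. The sketch below (ours, not from the article; ε = 0.5 and the grid are arbitrary choices) runs the Logit Dynamics to convergence from many initial mixtures and counts the distinct stable steady states it finds:

```python
import numpy as np

def logit_field(x, A, beta):
    # Logit Dynamics (5) for a given payoff matrix A.
    z = beta * (A @ x)
    e = np.exp(z - z.max())
    return e / e.sum() - x

def stable_states(A, beta, n_grid=12):
    # Crude search: iterate the dynamics to convergence from a grid of
    # initial mixtures and collect the distinct stable limits.  Only
    # stable steady states can be found this way.
    found = []
    for a in np.linspace(0.05, 0.9, n_grid):
        for b in np.linspace(0.05, 0.95 - a, n_grid):
            x = np.array([a, b, 1.0 - a - b])
            for _ in range(10000):
                dx = logit_field(x, A, beta)
                x = x + 0.1 * dx
                if np.abs(dx).max() < 1e-12:
                    break
            if not any(np.allclose(x, y, atol=1e-4) for y in found):
                found.append(x)
    return found

eps = 0.5                                   # illustrative value in (0, 1)
A = np.diag([1.0 - eps, 1.0, 1.0 + eps])    # the coordination payoffs above
for beta in (1.0, 5.0):
    print(beta, len(stable_states(A, beta)))
# For small beta a single almost-uniform steady state should be found;
# for beta past the folds, several stable steady states coexist.
```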

Conclusions

This research was motivated by the idea of identifying periodic attractors under a ‘rationalistic’ evolutionary dynamics, namely the smoothed best-reply or Logit Dynamics. Consequently, in an evolutionary Rock-Scissors-Paper game, we showed, by means of normal form computations, that, unlike for the Replicator, the Hopf bifurcation under the Logit Dynamics is generic even for three-strategy games, and stable cycles emerge via a supercritical Hopf bifurcation. These periodic attractors can be generated by varying either the payoff or the behavioural parameters. Moreover, via numerical computations on a 3x3 Coordination game, we showed that the Logit Dynamics may display multiple, isolated, interior steady states (created via a sequence of fold bifurcations), a phenomenon known not to occur under the Replicator Dynamics. Interestingly, bounded rationality (i.e. small β) may help coordination close to the Pareto-optimal equilibrium irrespective of the initial mixture of the population.

References

Brock, W. and Hommes, C.H. (1997). A Rational Route to Randomness, Econometrica, 65, 1059-1095.

Camerer, C.F. (2003). Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press.

Hofbauer, J., and Sigmund, K. (2003). Evolutionary Games and Population Dynamics. Cambridge University Press, Cambridge, UK.

Sandholm, W. (2006). Population Games and Evolutionary Dynamics. Unpublished manuscript, University of Wisconsin.

Schuster, P., Schnabl, W., Stadler, P.F. and Forst, C. (1991). Full characterization of a strange attractor: chaotic dynamics in low-dimensional replicator systems, Physica D, 48, 65-90.

Taylor, P.D. and Jonker, L. (1978). Evolutionarily stable strategies and game dynamics. Mathematical Biosciences, 40, 145-156.

Zeeman, E.C. (1980). Population dynamics from game theory. In: Global Theory of Dynamical Systems. Lecture Notes in Mathematics 819. New York: Springer.


The secondary insurance market

In the case of surrender, a policyholder can only sell the right of ownership of his policy to the insurer which has written the policy; insurers have a so-called buyer's monopoly. In the past decade, secondary insurance markets have been arising in foreign countries. On such a market, policyholders can also sell their right of ownership to parties other than the original insurer. This development may influence the way in which insurers determine the surrender values of insurance products. Up to this moment little has been published in the Netherlands on so-called secondary insurance markets and their possible consequences. In this article I will first give a historical overview of the origin of the market and then discuss the current situation. Subsequently I will set out the advantages and disadvantages of the secondary insurance market. Finally I consider the possibilities of such a market arising in the Netherlands.

Martijn Visser

obtained his Master's in actuarial sciences at the University of Amsterdam (UvA) in February 2007. He wrote his Master's thesis “Afkoop: een ondergewaardeerd onderwerp” under the supervision of Willem Jan Willemse. The thesis was partly based on internal research for a Dutch insurer. This article is based on a chapter of the thesis.

Several countries - most notably the United States, but for example also Great Britain - have an alternative to the surrender of insurance policies: sale on the secondary insurance market. This article discusses this market, where the right of ownership of a life insurance policy can be sold to a third party who does not have any interest in the life of the insured. Little has been published on the secondary market for life insurance. The first actuarial article on this kind of market appeared in 1996 (McGurk, 1996). In 2003, the first study on the economic effects of this market appeared (Bhattacharya et al., 2003). Since 2005, much more has been published on this subject.

The origin of the secondary insurance market

Formerly, a secondary insurance market, on which the right of ownership of policies is sold to third parties, only existed as an “underground” market. It was a market on which a lot of speculation took place. This speculation evolved out of the origin of insurance, where gambling played a crucial role. To prohibit the substantial gambling market on human lives, the Life Assurance Act was adopted in Great Britain in 1774. This law prohibited policies issued on lives in which the policyholder had no interest. Through jurisprudence the legal interpretation of this “insurable interest” was developed. Eventually, in the case Dalby versus India & London Life in 1854, it was determined that this interest only had to exist when the policy was issued. Consequently, policies in Great Britain are transferable after they are issued. The first public auction of policies in Great Britain already took place in 1843 (McGurk, 1996).

Elizur Wright, the founder of the American actuarial school, visited such an auction of existing life insurance policies on the London “Royal Exchange” in 1844. Wright worried about the lack of regulations involving these transactions and even compared them to slavery. On his return to the United States he introduced the right of surrender in his home state Massachusetts (the possibility of surrender already existed at several insurers). This right compelled insurers to always buy back the policies they had issued. Furthermore, Wright required insurers to calculate the surrender value through the so-called “cash value formula”. Only policies with a substantially higher market value than the value attached by the insurer were allowed to be traded on the secondary market. The main purpose of Wright's reforms was to create a fair situation for policyholders who wished to end their policies. However, these reforms also created a buyer's monopoly: the policyholder is only able to sell his policy to the insurer. To breach this monopoly, buyers willing to pay more than the surrender value had to be found. Therefore, the right to sell no longer desired insurance policies to third parties was acknowledged in the United States around 1900 (Belth, 2002; Coventry, 2006).


Strong market growth through the aids epidemic

For a long time the secondary insurance market remained limited in both Great Britain and the United States. The strong growth of the market in both countries is a relatively recent phenomenon and arose in the late eighties as a response to the aids epidemic. Many young HIV-infected people were in rapid need of money to pay for their medication and to maintain their standard of living. For these people, long-term assets had lost their value, and in a search for cash they attempted to sell their life insurance policies. Because of their strongly decreased life expectancy, the discounted value of the payment at death greatly increased and broadly exceeded the surrender value. Thus an underground, speculative market arose, which still contributes to the negative image of the modern secondary insurance market (Doherty & Singer, 2002). The market grew rapidly at the end of the eighties, when official companies started to play an active role in the valuation of policies and acted as intermediaries between buyers and sellers. “Policy Network” became the first company to play this active role in Great Britain in 1988 (McGurk, 1996), and “Living Benefits” was the first to act on the secondary market in the United States in 1989 (Belth, 2002). In Germany, too, at least one organisation is actively dealing in policies on the secondary insurance market. The German trade association “Bundesverband Vermögensanlagen im Zweitmarkt Lebensversicherungen” (BVZL) acts in the same way as players on the American and British secondary markets.

Search for new growth opportunities

At the turn of the century, the medication for aids patients improved substantially, so the life expectancy of aids patients increased as well and the secondary market became less profitable. This led to a search for new growth opportunities. Through further development of their straightforward models and methods, investors are nowadays able to buy policies of non-terminal patients with a declined life expectancy on the secondary market as well (Doherty & Singer, 2002). The secondary market for terminal patients is called the “viatical” market.[1] Improved techniques and a search for new markets, because of diminishing profit margins, led to the creation of the so-called “life settlements” market. Policies of individuals older than 65 years could now also be purchased, given that they have experienced a decline in their health and have a remaining life expectancy of between six and twelve years. With a longer life expectancy, the longevity risk is too substantial, which leads to a spreading of the yield over additional years, so that the investment return would be too low (Doherty & Singer, 2002). Table 1 provides a summary of the characteristics of both markets in the United States.

Characteristic | Viatical market | “Life settlement” market
Insured amount | < $100,000, mostly between $25,000 and $50,000 | > $100,000, mostly more than $250,000
Policyholder | Terminal patients between 25 and 40 years old | Individuals older than 65 years
Remaining life expectancy | < 2 years, mostly less than 12 months | > 2 years, mostly up to 12 or 15 years

Table 1: Viatical market versus “life settlement” market in the United States. Source: Deloitte (2005)

Negative attitude of insurers

Insurers responded mainly in a negative way to the presence of the secondary market. A substantial part of this negativity can probably be explained by the insurers' loss of the buyer's monopoly, which put their surrender results under pressure. Insurers have lobbied extensively in the United States to prevent further growth of the secondary market. Moreover, insurers have made arrangements themselves to diminish the attractiveness of the market. Since the end of the nineties, so-called ADBs (Accelerated Death Benefits) exist in the primary market: options to receive 25 to 100% of the payment at death in advance, when certain conditions have been fulfilled (Doherty & Singer, 2002).

Functioning of the secondary insurance market: the relation with the surrender value

As described before, on the secondary market the right of ownership of life insurance policies is sold to a third party, who has no direct interest in the insured life. In contrast to the case of surrender, the policy liabilities still apply to the insurer. As a consequence of the transaction, the buyer receives a financial interest in an early death of the insured. Why do secondary insurance markets exist at all? Why do profits arise for buyers on this market? Aren't insurers able to fill

1 Viatical is an old term for travelling money. For Catholics, the viaticum is the last Communion and part of the last sacraments.


this gap in the market? To answer these questions it is necessary to look more closely at the surrender values assigned by insurers to surrendering policyholders. When a policyholder judges that he no longer needs his policy, he wishes to receive the value that has been built up until that moment. If this surrender option did not exist, the demand for life insurance policies would diminish substantially because of uncertainty about future needs. This explains why insurers make surrender possible. The surrender value for an individual policyholder is calculated by means of the elapsed duration of the policy, the policy characteristics, etc. However, an important aspect in explaining the existence of a secondary market is that the surrender value for an individual policyholder is calculated by the insurer based on the expected mortality trend and the development of expenses and returns. When the expectations for an individual insured differ from this average expectation, the economic value of the individual policy will differ from the surrender value. Given that the mortality expectations for an individual insured are worse than the average mortality expectations, a payment at death will take place earlier than average. Therefore, the economic value of this policy is higher than the average economic value, which is illustrated in figure 1.

The difference between the economic value of the average policy and the surrender value is the margin for the insurer, including a cost charge.

Other opportunities for a secondary market

This suggests that it is also possible to sell policies with other deviations from the average expected parameters through the secondary market. When, for example, the current expected interest rates are lower than the interest rates on which the surrender value is based, the economic value of policies is higher than the surrender value as well. In Germany, the trade association BVZL experienced an enormous increase in purchases on the secondary market when interest rates and interest-rate expectations were low (AM, 2006).

The volume of the secondary market

What is the size of the secondary insurance market? During 2006, policies worth 200 million pounds were traded in Great Britain. This was estimated to be approximately 25% of the potential at that time (McGurk, 1996). During 2002, a total policy value of two billion dollars was traded on the American life settlements market. An estimated 20% of the policyholders older than 65 years own a policy for which the economic value exceeds the surrender value. The estimated value of all policies of elderly people with a declined health was around $100 billion during 2002 (Conning, 2003). According to the “Viatical Association of America”, policies worth $50 million were sold on the secondary market in 1990; during 1999 it was $1 billion, and in 2001 it was between $2 and $4 billion (Doherty & Singer, 2002). According to Belth, many articles state that the secondary market is growing, while the available data show the opposite. Table 2 shows the transactions of one of the key players in the viatical market.

Supervision on the secondary market

Because of its strong growth, the secondary market has been under supervision for a while now. The British supervisor has been auditing intermediaries on the British secondary market since 1992. By 2005, 23 states in the United States had adopted laws for the life settlements market. Moreover, the national organizations of supervisors and intermediaries are developing


Figure 1: Economic value and surrender value of a policy as a function of age (40 to 65 years), showing the surrender value and economic value based on normal health, the face value, and the policy value for a 65-year-old in very poor health (Doherty & Singer, 2002)

"During 2002, a total policy value of two billion dollar was traded on the American life

settlements market. "


regulations for this market, such as rules for market entry, basic training, obligations for all insurers to point out the existence of the market to insured people, and new information brochures (Deloitte, 2005).

Advantages and disadvantages of a secondary market

For insured people with a declined health who no longer want to be insured, the secondary market provides a substantially higher value for a policy than the surrender value. At this moment the market is legal and organised. Generally, the liquidity in the insurance market is thereby increased, and the value of insurance products will increase through this option for the consumer. Finally, insurers maintain their contracts, and advisors their commission, for these policies, since those policyholders no longer lapse. However, the secondary market encounters several ethical objections. The market could encourage the assassination of an insured, the policy can be resold several times without the knowledge of the insured, and confidential medical dossiers are accessible to buyers. Moreover, the market, as it has arisen in the United States, is very sensitive to fraud. An insured is able to present a more declined health than is the case in reality. In addition, the market still needs to become better organised, since for the time being it lacks effective supervision and solid mathematical methods. Finally, insurers protest against the market because of a decrease in their lapse results.

Opportunities for a Dutch market

Finally, I want to discuss the possibility of a secondary insurance market arising in the Netherlands. In principle, the Wet financieel toezicht and other laws do not hinder the existence of a Dutch secondary insurance market. The right to transfer the policy to a third party after it has been taken out is captured in the policy conditions. Insurers will not be eager to narrow this opportunity, since the value of an insurance as a financial instrument for consumers would then be lowered. In 2004 the Autoriteit Financiële Markten (AFM) made clear that in its opinion a viatical market for insurances would be in breach of “decent morals” in the Netherlands. The reason for this position is that with viatical settlements the investor obtains an interest in an early death of the insured. Likewise, the Minister of Finance has commented negatively on the desirability of a Dutch secondary market in the past (Ministerie van Financiën, kamervragen 29-3-2004). Dutch entrepreneurs are already acting on the American market, and in April 2007 the Dutch national news agency ANP reported that more and more Dutchmen invest in secondary insurance policies in the United States.

Summary

On the secondary insurance market, the right of ownership of a life insurance policy can be sold to a third party who does not have any interest in the life of the insured. Countries like the United States and Great Britain have such a market as an alternative to the surrender of insurance policies. For a long time this market remained limited and “underground”, but it grew rapidly in the eighties as a response to the aids epidemic. Currently, the market is also accessible to elderly people whose health has strongly declined. The market exists because of the difference between the individual economic value of a policy (which is increased by an expected early death) and the surrender value, which is calculated on average (mortality) assumptions. The market offers advantages as well as disadvantages for insurers and policyholders. In the Netherlands, the AFM and the Minister of Finance strongly reject the emergence of a secondary market.


Year Transactions

1995 $ 65.5 mln

1996 $ 46.3 mln

1997 $ 19.1 mln

1998 $ 27.4 mln

1999 $ 21.3 mln

2000 $ 10.2 mln

Table 2: Transactions at the American viatical company Life Partners, 1995-2000, in millions of U.S. dollars.

Source: Belth (2002)


A Probabilistic Analysis of a Fortune-wheel Game

A so-called fortune wheel is given. This is a disc divided into twenty wedges, numbered from 1 to 20, and provided with a pointer which can be spun. When the pointer stops, it always points to one of the twenty wedges and thus to an integer between 1 and 20. Now let n persons play the following game: every player turns the wheel either one or two times; after the first turn he can decide to stop, in which case his score is the result of the first turn. He can also decide to turn the wheel a second time; then his score is the sum of the first and the second turn, provided that this sum is at most 20; if the sum is larger than 20, his score is 0. The player with the highest score wins, on the understanding that when multiple players get the same score, the player who got this score first wins. We are interested in the optimal strategies for all players, that is, strategies that maximize their probability of winning.

Rein Nobel

is an assistant professor in operations research at the Vrije Universiteit Amsterdam. He graduated in pure mathematics and in computer science. His main research interests are probability, Markov decision theory, queueing theory and simulation.

Suzanne van der Ster

is a bachelor student in Operations Research. She is currently finishing her bachelor thesis under the supervision of Rein Nobel. The article was written during the period in which Suzanne was a student assistant for Rein Nobel.

The two-player game

First we consider the case of two players [n = 2], A and B. Player A begins, so when both players get the same score A wins. Now we define a so-called m-strategy. A player follows the m-strategy if he only turns the wheel a second time in case the result of the first turn is at most m. Let Xm be the score of a player who follows the m-strategy. We determine the probability distribution of Xm under the obvious conditions [all results of a turn are equally probable and separate turns are independent]. The collection of possible values for Xm, say WXm, is equal to {0, 1, ..., 20}. Now we have to determine the probability IP(Xm = i) for every i ∈ WXm. This gives

$$IP(X_m = i) = \begin{cases} \displaystyle\sum_{k=1}^{m} \frac{1}{20}\cdot\frac{k}{20} = \frac{m(m+1)}{800}, & i = 0,\\[6pt] \displaystyle\sum_{k=1}^{i-1} \frac{1}{20}\cdot\frac{1}{20} = \frac{i-1}{400}, & i = 1,\ldots,m,\\[6pt] \displaystyle\frac{1}{20} + \sum_{k=1}^{m} \frac{1}{20}\cdot\frac{1}{20} = \frac{1}{20} + \frac{m}{400}, & i = m+1,\ldots,20. \end{cases}$$
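As a quick mechanical check of this distribution, here is a sketch of ours (the helper name prob_X is not from the article), using exact rational arithmetic:

```python
from fractions import Fraction

def prob_X(m, i):
    # P(X_m = i) for the m-strategy, a direct transcription of the
    # three cases derived above.
    if i == 0:
        return Fraction(m * (m + 1), 800)
    if 1 <= i <= m:
        return Fraction(i - 1, 400)
    return Fraction(1, 20) + Fraction(m, 400)    # i = m+1, ..., 20

# Each distribution must sum to one over the support {0, 1, ..., 20}.
assert all(sum(prob_X(m, i) for i in range(21)) == 1
           for m in range(1, 21))
```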

Next we introduce the random variable Y, the score of player B. Naturally, the probability distribution of Y depends on m and on the strategy of player B. Since player B knows the score of player A, he will adjust his strategy to it. Reasoning tells us that the only rational strategy for player B is to play an a-strategy when player A gets score a. Therefore the conditional distribution of Y given Xm = a is equal to the distribution of Xa. Now we can calculate the probability that player B wins by splitting according to the partitioning events {Xm = i}, i = 0, 1, ..., 20, and subsequently conditioning on these events:

$$IP(\text{player B wins}) = 1 - \Bigg(\frac{1}{160000}\bigg[\Big(\frac{1}{2}m(m+1)\Big)^{2} - \frac{m(m+1)(2m+1)}{6}\bigg] + \Big(\frac{1}{20} + \frac{m}{400}\Big)\frac{1}{400}\bigg[2870 - \frac{m(m+1)(2m+1)}{6}\bigg]\Bigg), \; [1]$$

where we use the following equalities

$$\sum_{i=1}^{m} i^2 = \frac{m(m+1)(2m+1)}{6}$$


and

$$\sum_{i=1}^{m} i^3 = \Big(\frac{1}{2}m(m+1)\Big)^{2}.$$

The result above immediately gives the probability that player A wins. Introduce α(m) := IP(player A wins). Then we obtain

$$\alpha(m) = \frac{1}{160000}\bigg[\Big(\frac{1}{2}m(m+1)\Big)^{2} - \frac{m(m+1)(2m+1)}{6}\bigg] + \Big(\frac{1}{20} + \frac{m}{400}\Big)\frac{1}{400}\bigg[2870 - \frac{m(m+1)(2m+1)}{6}\bigg]. \qquad (*)$$

We see that the probability that player A wins is a fourth-degree expression in m. Some simple algebra [MAPLE] gives

$$\alpha(m) = \frac{287}{800} + \frac{5733}{320000}\,m - \frac{1}{15360}\,m^{2} - \frac{7}{160000}\,m^{3} - \frac{1}{1920000}\,m^{4}.$$

We can easily determine m* := arg max{α(m) : m = 1, ..., 20} by calculating the value of α(m) for the different values of m. See Table 1.
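The maximization is equally mechanical; a sketch of ours (the helper name alpha is hypothetical) evaluating the fourth-degree expression exactly:

```python
from fractions import Fraction

def alpha(m):
    # The fourth-degree winning probability alpha(m) from above,
    # evaluated in exact rational arithmetic.
    return (Fraction(287, 800) + Fraction(5733, 320000) * m
            - Fraction(1, 15360) * m ** 2
            - Fraction(7, 160000) * m ** 3
            - Fraction(1, 1920000) * m ** 4)

m_star = max(range(1, 21), key=alpha)
print(m_star, alpha(m_star))    # 10 and 7719/16000, matching Table 1
```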

Next, we calculate the probability distribution of Y, the score of player B, as a function of the m-strategy of player A. We start with calculating the probability that player B obtains a score j ≠ 0, by splitting and conditioning on the events {Xm = i}:

$$IP(Y = j) = \sum_{i=0}^{j-1}\Big(\frac{1}{20} + \frac{i}{400}\Big)\,IP(X_m = i) \; + \; \frac{j-1}{400}\sum_{i=j}^{20} IP(X_m = i). \qquad (**) \; [2]$$

To be able to continue, we need to distinguish two cases, m < j − 1 and m ≥ j − 1. First we consider the case m < j − 1.

m | α(m) || m | α(m)
1 | 60249/160000 || 11 | 38567/80000
2 | 31517/80000 || 12 | 38237/80000
3 | 6571/16000 || 13 | 15029/32000
4 | 6823/16000 || 14 | 1827/4000
5 | 14109/32000 || 15 | 7021/16000
6 | 18151/40000 || 16 | 2077/5000
7 | 37177/80000 || 17 | 61769/160000
8 | 3787/8000 || 18 | 1121/3200
9 | 15341/32000 || 19 | 4923/16000
10 | 7719/16000 || 20 | 4123/16000

Table 1: Winning probabilities α(m) for player A in the two-player game.

1 Splitting according to the partitioning events {X_m = i} and using that, given X_m = i, player B's score Y is distributed as X_i:

$$IP(\text{player B wins}) = IP(Y > X_m) = \sum_{i=0}^{20} IP(Y > i \mid X_m = i)\, IP(X_m = i) = \sum_{i=0}^{20} IP(X_i > i)\, IP(X_m = i),$$

where

$$IP(X_i > i) = \sum_{k=i+1}^{20} IP(X_i = k) = (20-i)\Big(\frac{1}{20} + \frac{i}{400}\Big) = \frac{400 - i^2}{400},$$

so that

$$IP(\text{player B wins}) = 1 - \frac{1}{400}\sum_{i=0}^{20} i^2\, IP(X_m = i) = 1 - \frac{1}{400}\bigg(\sum_{i=1}^{m} i^2\,\frac{i-1}{400} + \Big(\frac{1}{20} + \frac{m}{400}\Big)\sum_{i=m+1}^{20} i^2\bigg),$$

which equals the stated expression by the two summation identities above, together with $\sum_{i=1}^{20} i^2 = 2870$.

2 Conditioning on X_m:

$$IP(Y = j) = \sum_{i=0}^{20} IP(X_i = j)\, IP(X_m = i) = \sum_{i=0}^{j-1}\Big(\frac{1}{20} + \frac{i}{400}\Big) IP(X_m = i) + \frac{j-1}{400}\sum_{i=j}^{20} IP(X_m = i).$$

3 For m < j − 1, the first sum in (**) splits into the parts i = 0, i = 1, ..., m and i = m+1, ..., j−1:

$$IP(Y = j) = \frac{1}{20}\cdot\frac{m(m+1)}{800} + \sum_{i=1}^{m}\frac{20+i}{400}\cdot\frac{i-1}{400} + \frac{20+m}{400}\sum_{i=m+1}^{j-1}\frac{20+i}{400} + \frac{j-1}{400}\,(21-j)\,\frac{20+m}{400}$$

$$= \frac{1}{160000}\Big(10m(m+1) + \sum_{i=1}^{m}(20+i)(i-1) + (20+m)\Big[\sum_{i=m+1}^{j-1}(20+i) + (j-1)(21-j)\Big]\Big)$$

$$= \frac{1}{160000}\Big(10(1-j)(j-82) - \frac{1}{6}m\big(3j^2 - 249j + 2708\big) - \frac{21}{2}m^2 - \frac{1}{6}m^3\Big).$$


Splitting the summations in (**) gives

$$IP(Y = j) = \frac{1}{160000}\Big(10(1-j)(j-82) - \frac{1}{6}m\big(3j^{2} - 249j + 2708\big) - \frac{21}{2}m^{2} - \frac{1}{6}m^{3}\Big). \; [3]$$

Now we consider the case m ≥ j − 1. The other necessary split of (**) gives

$$IP(Y = j) = \frac{1}{160000}\Big(\frac{2209}{6}j + 11j^{2} - \frac{1}{6}j^{3} - 379 + \frac{1}{2}(21-j)\,m(m+1)\Big). \; [4]$$

Finally, we need to calculate the probability that player B obtains score 0:

$$IP(Y = 0) = \frac{1}{320000}\Big(\sum_{i=1}^{m}(i-1)i(i+1) + (20+m)\sum_{i=m+1}^{20} i(i+1)\Big). \; [5]$$

Some algebra leads to

$$IP(Y = 0) = \frac{1}{320000}\Big(61600 + \frac{18397}{6}m - \frac{251}{12}m^{2} - \frac{43}{6}m^{3} - \frac{1}{12}m^{4}\Big).$$

We summarize the results.

$$IP(Y = j) = \begin{cases} \dfrac{1}{320000}\Big(61600 + \dfrac{18397}{6}m - \dfrac{251}{12}m^{2} - \dfrac{43}{6}m^{3} - \dfrac{1}{12}m^{4}\Big), & j = 0,\\[8pt] \dfrac{1}{160000}\Big(\dfrac{2209}{6}j + 11j^{2} - \dfrac{1}{6}j^{3} - 379 + \dfrac{1}{2}(21-j)\,m(m+1)\Big), & j = 1,\ldots,m+1,\\[8pt] \dfrac{1}{160000}\Big(10(1-j)(j-82) - \dfrac{1}{6}m\big(3j^{2} - 249j + 2708\big) - \dfrac{21}{2}m^{2} - \dfrac{1}{6}m^{3}\Big), & j = m+2,\ldots,20. \end{cases}$$
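The three cases can also be cross-checked against the defining relation (**). Here is a sketch of ours (the function names are hypothetical) that verifies the closed forms in exact arithmetic:

```python
from fractions import Fraction

def prob_X(m, i):
    # P(X_m = i), as derived in the two-player analysis.
    if i == 0:
        return Fraction(m * (m + 1), 800)
    if 1 <= i <= m:
        return Fraction(i - 1, 400)
    return Fraction(1, 20) + Fraction(m, 400)

def prob_Y_direct(m, j):
    # Condition on X_m as in (**): given X_m = i, Y is distributed as X_i.
    return sum(prob_X(i, j) * prob_X(m, i) for i in range(21))

def prob_Y_closed(m, j):
    # The three-case closed form summarized above.
    if j == 0:
        return Fraction(1, 320000) * (61600 + Fraction(18397, 6) * m
                                      - Fraction(251, 12) * m ** 2
                                      - Fraction(43, 6) * m ** 3
                                      - Fraction(1, 12) * m ** 4)
    if j <= m + 1:
        return Fraction(1, 160000) * (Fraction(2209, 6) * j + 11 * j ** 2
                                      - Fraction(1, 6) * j ** 3 - 379
                                      + Fraction(1, 2) * (21 - j) * m * (m + 1))
    return Fraction(1, 160000) * (10 * (1 - j) * (j - 82)
                                  - Fraction(1, 6) * m * (3 * j ** 2 - 249 * j + 2708)
                                  - Fraction(21, 2) * m ** 2
                                  - Fraction(1, 6) * m ** 3)

# The two computations agree for every m and j.
assert all(prob_Y_direct(m, j) == prob_Y_closed(m, j)
           for m in range(1, 21) for j in range(21))
```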

To check these answers it is very useful to simulate the game and compare the answers to the simulated values. In Table 2 we compare IP(Xm = i), i = 0, 1, ..., 20, and IP(Y = j), j = 0, 1, ..., 20, to simulated values for these probabilities, where we simulated the two-player game 1 million times. Player A always follows his optimal m-strategy, so m = 10. In these 1 million simulations player A won 482113 times, and we see that this is in agreement with the calculated probability 7719/16000 = 0.482.
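That check is easy to reproduce; a sketch of ours (the seed is arbitrary, so simulated counts will differ slightly from those quoted above):

```python
import random

def spin_score(m, rng):
    # One player's score on the 20-wedge wheel under the m-strategy.
    first = rng.randint(1, 20)
    if first > m:
        return first                    # stop after one turn
    total = first + rng.randint(1, 20)  # turn again
    return total if total <= 20 else 0

def simulate(n_games=1_000_000, m=10, seed=2008):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_games):
        a = spin_score(m, rng)
        b = spin_score(a, rng)   # player B best-responds with the a-strategy
        wins += a >= b           # equal scores go to player A
    return wins / n_games

print(simulate())   # should land close to 7719/16000 = 0.48244
```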

The three-player game

Next we consider the same game for three play-ers, A, B and C [n = 3]. Again the problem is to determine the optimal m-strategy for player A. But first we consider the optimal strategy for player B. Player B knows the score of player A, say a. Some thinking tells us that B should fol-low a q-strategy, where q = max {a,m*} and m* = 10 the value we found above for the optimal strategy for player A in the two-player case. Finally we consider the strategy for player C. He knows the scores of the players A and B, say a and b. Then player C will follow a r-strat-egy, where r = max{a,b}. We will assume that player A follows a m-strategy and we look for the value of m that will maximize his probabil-ity of winning. The random variable Xm is the individual score of a player following a m-strat-egy. Let Y be the score of player B and Z the score of player C [naturally, the probability dis-tributions of Y and Z also depend on m, but we suppress this in the notation]. Now, for player B the conditional probability distribution of Y given Xm = a is equal to the probability distribu-tion of Xmax{a,m*}. Furthermore, the conditional probability distribution of Z given Y = b and Xm = a is equal to the probability distribution of Xmax{a,b}. Again, let be the probability that player A wins when he plays the m-strategy. Then we get, again by splitting according to the partitioning events {Xm = i},

*

20

max{ , }0 0 0

max{ , }

( ) ( )

( ) ( )

i i

i ji j k

mi m

α m IP X k

IP X j IP X i= = =

= =

= =

∑∑∑6

For clarity we first calculate the inner summa-

4 m m

i i j i m

IP Y j m m i i j i j m

j j j j m m

20

1 1

2 3

1( ) (10 ( 1) (20 )( 1) ( 1)( 1) ( 1)(20 ))160000

1 2209 1 1( 11 379 (21 ) ( 1)).160000 6 6 2

= = = +

= = + + + − + − − + − + =

+ − − + − +

∑ ∑ ∑

5 m

i m mi i i

m

i m i i m

iIP Y IP X IP X i i i IP X i i i

m i i i i m i i

20 20

1 1 120 20

2

1 1 1

1 1 1( 0) ( 0) ( ) ( 1) ( ) ( ( 1)800 800 400

1 1( 1)[ ]) ( ( 1) (20 ) ( 1)).20 400 320000

= = =

= + = = +

−= = = = = + = = + +

+ + = − + + +

∑ ∑ ∑

∑ ∑ ∑

ORM

Page 19: Aenorm 58

19

Voor onze vestigingen in Amsterdam, Purmerend,Rotterdam en Zwolle zoeken wij actuarieelgeschoolde professionals met relevante werk -ervaring (2-5 jaar) voor de functie van specialist.

Heb jij de ambitie om complexe actuariële pensioen-vraagstukken op een ondernemende en creatieve manierop te lossen en al doende je het vak eigen te maken?Wil jij van je collega-specialisten het adviesvak leren omdaarna snel door te groeien tot een zelfstandig adviseurvan de klant? Kijk dan op www.aon.nl (onder vacatures)voor meer informatie of bel de heer Rajish Sagoenie,Executive Director, op telefoonnummer 020 430 53 93.

Aon Consulting is wereldwijd de op twee na grootste risico-adviseur op het gebied van arbeidsvoorwaardenen verleent in Nederland adviesdiensten aan (beurs genoteerde) ondernemingen en pensioenfondsen.Actuarial Services biedt adviezen en praktische ondersteuning op het gebied van pensioenen. Het dienstenportfolio strekt zich uit van strategischebeleidsadvisering, pensioen advies en administratie toten met procesbegeleiding en tijdelijke ondersteuningbij bijvoorbeeld implementaties en detachering. AonConsulting hanteert een eigen stijl waarin professiona-liteit, prestatie, persoonlijke aandacht en passie centraalstaan voor de medewerker en de klant..

Aon biedt mij een open, professionele omgeving, waarin ik

mijzelf vakinhoudelijk en persoonlijk kan blijven ontwikkelen

4389_1

R i s i c o m a n a g e m e n t • E m p l o y e e B e n e f i t s • V e r z e k e r i n g e n

Page 20: Aenorm 58

20 AENORM 58 January 2008

i IP(Xm=i) simulation j IP(Y = j) simulation0 0.1375 0.1376 0 0.2568 0.25741 0 0 1 0.0069 0.00682 0.0025 0.0025 2 0.0090 0.00903 0.0050 0.0049 3 0.0113 0.01134 0.0075 0.0075 4 0.0137 0.01365 0.0100 0.0099 5 0.0162 0.01616 0.0125 0.0125 6 0.0188 0.01897 0.0150 0.0151 7 0.0216 0.02168 0.0175 0.0173 8 0.0244 0.02449 0.0200 0.0199 9 0.0273 0.027310 0.0225 0.0222 10 0.0303 0.030111 0.0750 0.0750 11 0.0333 0.033512 0.0750 0.0752 12 0.0389 0.039113 0.0750 0.0748 13 0.0444 0.044214 0.0750 0.0749 14 0.0496 0.049615 0.0750 0.0751 15 0.0547 0.054516 0.0750 0.0753 16 0.0596 0.059217 0.0750 0.0749 17 0.0643 0.064518 0.0750 0.0753 18 0.0688 0.068519 0.0750 0.0751 19 0.0731 0.073120 0.0750 0.0749 20 0.0772 0.0775

Table 2: Calculated and simulated probabilities for the scores of player A and player B.

tion, using the probability distribution of Xm [see first page]. Introduce Mij := max{i, j}.

0

1

1( ) ( 1)800

1 1 [ ( 1) ( 1)].400 800

ij

i

M ij ijk

i

ij ijk

IP X k M M

k M M i i

=

=

= = + +

−= + + −

Now we get

* *

20

0

1( ) ( )[( ( 1) ( 1)]*800

1[ [ ( 1) ( 1)]]800

mi

im im

α m IP X i i i i i

M M i i

=

= = − + +

+ + −

∑7

6 m m m m m m

i ii i i

m m m m i ji j i j i j

α m IP X Y X Z IP X Y X Z X i IP Z i Y i X i

IP Z i Y j X i IP Z i Y j X i IP Y j X i IP X i IP X i

20 20

0 020 20 20

max{ , }0 0 0 0 0 0

( ) ( ; ) ( ; ; ) ( ; ; )

( ; ; ) ( | ; ) ( | ) ( ) ( )*

= =

= = = = = =

= ≥ ≥ = ≥ ≥ = = ≤ ≤ = =

≤ = = = ≤ = = = = = = ≤

∑ ∑

∑∑ ∑∑ ∑∑i i

m i j mi m i mi j k

IP X j IP X i IP X k IP X j IP X i* *

20

max{ , }max{ , } max{ , }0 0 0

( ) ( ) ( ) ( ) ( )= = =

= = = = = =∑∑∑7

im

i

m M mi j i

α m IP X i i i i i IP X j IP X i i i i i

M

*

20 20

0 0 0

1 1( ) ( ) [ ( 1) ( 1)] ( ) ( )[( ( 1) ( 1)]*800 800

1[ [800

= = =

= = + + − = = = − + +∑ ∑ ∑

im imM i i* *( 1) ( 1)]].+ + −

To be able to continue, we split the summations over i in ∑i≤m* and ∑i>m*. We get

*

*

2 * *

020

4

1

1( ) ( ( )2 [ ( 1)640000

( 1)] ( )4 ).

m

mi

mi m

α m IP X i i m m

i i IP X i i

=

= +

= = + +

− + =

∑ 8

To be able to do the further calculations we need to know whether here [i.e. in the three-player case] the optimal value for m for player A will be greater or less than m* [the optimal value for the two-player case]. Intuitively we expect that the optimal m we are looking for will be greater than m*, because player A has

ORM ORM

Page 21: Aenorm 58

21

more opponents in this case. He will thus try to reach a higher score than in the two-player case.Therefore we will now continue the calculations assuming that the m we are looking for is great-er than the m* = 10 we found earlier [of course the formula for can also be calculated for m < m* but this is omitted]. From now on we will insert the value m* = 10 in calculations and assume m > 10. We continue as following

α m m

m m m

m m

2 3 4

5 6

1 724481 7226667( ) (640000 5 1000

1 7 52000 100 48

21 1 ).500 3000

= + −

− − −

−9

We see that , the probability that player A wins when he follows a m-strategy, is a sixth-degree polynomial in m. Now we can find m** := arg max{ : m = 1, 2, ..., 20} by calculating the expression for the different values of m > 10. The result is m** = 13 with correspond-ing probability = 4370213/12800000 ≈ 0.341. In Table 3 we show all the winning prob-abilities for player A for m ≥ 10.Just as in the two-player case we can deter-mine the probability distribution of Y, the score of player B. We start with calculating the prob-ability that player B obtains a score j ≠ 0. As before we do this by splitting according to and conditioning on the events {Xm = i}.

m

mmi

i mi m

IP Y j IP X j IP X i

IP X j IP X i

*

*

*

020

1

( ) ( ) ( )

( ) ( ).

=

= +

= = = = +

= =

∑ 10

At this point we need to distinguish three cas-es; j ≤ m*, m* < j ≤ m* and j > m. We start with j ≤ m*

jIP Y j m m

m m m

m j

1 1( ) [ ( 1)400 8001 ( 1) (20 )*

8001 1( )] .20 400 400

−= = + +

− + −

−+ =

11

Now the case m* < j ≤ m*

IP Y j m m m m

m m m m

mj m j j

3

2

* *

* * 2

3 2

1 1 1( ) [ ( 1)8000 40 301 1 21 2120 12 40 40

1 1 11( 3)40 120 20

2209 379].120 20

= = + + +

+ + + +

− − + +

−12

Now, for j > m

8 m

m mi i m

m

m mi i

α m IP X i i m m i i IP X i i i i i i

IP X i i m m i i IP X i i

*

*

*

202 * * 2

0 1

2 * * 4

0

1 1 1( ) ( ( )2 [ [ ( 1) ( 1)]] ( )2 [ [ ( 1) ( 1)]]800 800 800

1 ( ( )2 [ ( 1) ( 1)] ( )4 ).640000

= = +

=

= = + + − + = + + − =

= + + − + =

∑ ∑

∑m*

20

1= +∑

9 m m

m m mi i mi m

m

i i i m

α m IP X i i m m i i IP X i i IP X i i

i i m i i i i i

*

*

202 * * 4 4

0 1110 20

2 4 4

1 11 1

1( ) ( ( )2 [ ( 1) ( 1)] ( )4 ( )4 )640000

1 1 1 1( 2 [110 ( 1)] 4 [ ]4 )640000 400 400 20 400

164

= = += +

= = = +

= = + + − + = + = =

− −+ − + + + =

∑ ∑ ∑

∑ ∑ ∑m

i i i i m

mi i i i i i i

m m m m m m

10 10 202 3 4 4

1 1 11 12 3 4 5 6

11 1 1 20( ( 1) ( 1) ( 1) )0000 20 200 100 100

1 724481 7226667 1 7 5 21 1( ).640000 5 1000 2000 100 48 500 3000

= = = = +

+− + − + − + =

+ − − − − −

∑ ∑ ∑ ∑

10 m

m m m mi i i m

m

m m i mmi i m

IP Y j IP Y j X i IP Y j X i IP X i IP Y j X i

IP X i IP X j IP X i IP X j IP X i

*

*

*

*

*

20 20

0 0 120

0 1

( ) ( | ) ( | ) ( ) ( | )*

( ) ( ) ( ) ( ) ( ).

= = = +

= = +

= = = = = = = = + = =

= = = = + = =

∑ ∑ ∑

∑ ∑

ORM

Page 22: Aenorm 58

22 AENORM 58 January 2008

IP Y j m m m

m m m

m m m

mj j j j

3 2

*

* * *

3 2

2

1 1( ) [ ( 1)8000 40

1 1 1120 40 60

1 21 677120 40 30

1 1 83( 83) 41].40 2 2

= = + +

− + −

− − −

− − + − 13

Finally, for = 0 we obtain

IP Y m m m m

m m m m

m m m

m

4 3 2

* *

* * * *

4 3 2

1( 0) [ ( 1) ( 1)640000

1 12 2

1 43 2516 3 6

18397 123200].3

= = + + +

− − + −

− − +

+14

We resume the results.

10 10575891/ 32000000

16 5094419/ 16000000

11 10770827/ 32000000

17 19102553/ 16000000

12 2178889/ 6400000

18 8646677/ 32000000

13 4370213/ 12800000

19 7423467/ 32000000

14 5419431/ 16000000

20 5823467/ 32000000

15 10604789/ 32000000

11 m m

i i mi mm

i i m

j j i j i j m jIP Y j m m m m

m j m i m m m m m

*

*

20

1 1120

1 1

1 1 1 1 1 1 1 1 1 1( ) ( 1) ( ) [ ( 1)400 800 400 400 400 400 400 20 400 400 800

1 1 1 1 1 1( 1) ( )] [ ( 1) ( 1) (20 )(400 20 400 400 800 800 20

= = += +

= = +

− − − − − − −= = + + + + + = + +

−− + + = + + − + − +

∑ ∑ ∑

∑ ∑ j 1)] .400 400

−=

12 jm m

i i ji mjm

i m i i m

m m i i i j iIP Y j m m

j m m mm m i i i

*

*

*

*

1* *

1 1120 * *

1 1 1

1 1 1 1 1 1 1 1( ) ( ) ( 1) ( ) ( )20 400 800 20 400 400 20 400 400 400 400

1 1 1 1 1 1 1( ) ( ) ( 1) ( ) ( 1) ( 1)400 20 400 20 400 800 400 20 400 160000

= == +

= + = = +

− − − −= = + + + + + + + +

−+ = + + + + − + − +

∑ ∑ ∑

∑ ∑ ∑m

i j

j j m i m m m m m m

m m m mj m j j

3 2* * *

* 2 3 2

1 1 1 1 1 1 1( 1) (20 ) ( ) [ ( 1)160000 400 20 400 8000 40 30 20

1 21 21 1 1 11 2209 379( 3) ]12 40 40 40 120 20 120 20

=

− −− + − + = + + + +

+ + + − − + + −

∑.

13j jm

i i mi mm

i j i i m

m m i i i i mIP Y j m m

j m m mm m i i i

*

*

*

*

1 1* *

1 1120 * *

1 1

1 1 1 1 1 1 1 1( ) ( ) ( 1) ( ) ( ) ( )( )20 400 800 20 400 400 20 400 400 20 400 20 400

1 1 1 1 1 1 1( ) ( ) ( 1) ( ) ( 1) ( 1)400 20 400 20 400 800 400 20 400 8000

− −

= = += +

= = = +

− −= = + + + + + + + + + +

−+ = + + + + − + −

∑ ∑ ∑

∑ ∑m m

i mj

i m

i i

m m j mj m i j m m m m

m m m

*

3

2

11

* *

1* * 3

1 ( 1)160000

1 1 1 1 1 1 1 1 1( 1) ( ) ( ) (20 1)* ( ) [ ( 1)20 20 400 400 20 400 400 20 400 8000 40 120

1 1 1 2140 60 120 40

= +−

= +

+ − +

−− − + + + + − + + = + + −

+ − −

∑ ∑

∑m m mj j j j2 2677 1 1 83( 83) 41].

30 40 2 2− − − − + −

14 m m

i i mi mm m

i i m

i iIP Y m m m m m m i i i i

m mm m m m m m i i i i

*

*

*

*

20* * * *

1 11

* * * *

1 1

1 1 1 1 1 1 1( 0) ( 1) ( 1) ( 1) ( 1) ( 1)*800 800 800 400 800 400 800

1 1 1 1 1 1( ) ( 1) ( 1) ( 1) ( 1) ( 1)( 1) (20 400 640000 320000 320000 800 20 400

= = += +

= = +

− −= = + + + + + + + +

+ = + + + + − + + − + +

∑ ∑ ∑

∑ ∑

i m

i i m m m m m m m m m m m m4 3 2

20* * * * * * 4 3 2

1

)*

1 1 1 1 43 251 18397( 1) [ ( 1) ( 1) 123200].640000 2 2 6 3 6 3

= +

+ = + + + − − + − − − + +∑

ORM

Page 23: Aenorm 58

23

IP Y j

m m

m m m

m m m j

m m m

m

jj m

m m m

m m m

m m mj

m j j

4

3 2

3 2

* *

*

* * *

4 3 2

*

*

* * *

2

3 2

( )1 [ ( 1)*

6400001( 1)2

1 0,2

1 43 2516 3 618397 123200]

31,...,1

400 ,1 1[ ( 1)

8000 401 1 130 20 1221 21 1 *40 40 40

1 11( 3)120 20

2209 379]120 20

= =

+

+ + −

− + − =

− − +

+

=−

+ +

+ + +

+ +

− − + +

j m m

m m m

m m m j m m m m

mj j j j

3 2

*

*

* * *

3 2

2

1,..., ,

.

1 1[ ( 1)8000 401 1 1

1,120 40 601 21 677 ...,20.

120 40 301 1 83( 83) 41].40 2 2

⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪ = +⎪⎪⎪⎪⎪⎪⎪⎪⎪

+ +⎪⎪⎪

− + −⎪ = +⎪⎪ − − −⎪⎪⎪ − − + −⎪⎩

Again, we compare these results with simulation results. We simulate the game 1 million times, where all players play their optimal strategy. This means m* = 10 and m** = 13. In 1 million simulations player A has won 341327 times, which is again in good agreement with the the-oretical probability of 4370213 / 12800000 = 0.341.Now the final issue is to calculate the ty

= =

= = = = =∑∑20 20

1 1

( ) ( ; ; )mi j

IP Z k IP X i Y j Z k

More than three players

j

Table 4: Probability distribution of the score of player B in the three-player game.

ORM

Page 24: Aenorm 58

24 AENORM 58 January 2008

m** = 13 and a is the score of player A. Player C knows the scores of players A and B, say a and b, but also has to keep in mind that player D will play the game last. This means that play-er C will follow a r-strategy, where r = max{a, b,m*} and m* = 10. Finally player D will follow a s-strategy, where s = max{a,b,c}. We try dif-ferent values for m and discover [by simula-tion!] that the probability of winning for player A is optimal when m*** = 14. This gives a win-ning probability for player A of ≈ 0.2715. See Table 6 for the winning probabilities for the other values of m. We see from Table 6 that the winning probabilities for player A do not differ very much for m = 13, 14, 15. So, around the optimal value for m the function

is rather flat.

m m

13 0.2692 17 0.2499

14 0.2715 18 0.2278

15 0.2698 19 0.1938

16 0.2637 20 0.1441

Table 6: Simulations for the winning probability of player A for different values of m.

For the five-player case we do the same thing. Here every player is again playing his optimal strategy and we determine the optimal m-strat-egy for player A. Now player B is using the opti-mal value of m from the four-player case in his optimal strategy. Different values of m give the probabilities of winning for player A that appear in table 7. The highest probability ap-pears where m = 15. This would be the optimal m for player A. This gives a probability of win-ning ≈ 0.2291.

From the previous simulations for the four- and five-player cases it becomes clear that for high-er numbers of players, the optimal value for m for player A increases too.

Conclusion

In this paper we have analyzed a simple fortune-

wheel game for n players in which each player can increase his score by turning the wheel a second time. Turning a second time can result in a zero score. This asks for finding an optimal threshold strategy for each player which maxi-mizes his probability to win. For the case of two and three players we have presented a detailed

probabilistic analysis. The optimal strategy for the first player has been calculated, and also the probability distributions of the score of the second player have been derived. The math-ematical results have been confirmed by simu-lations. For the case of four and five players we have only presented simulations. All the re-sults confirm the conjecture that the optimal threshold for the first player is increasing in the number of players, whereas the maximum win-ning probability of the first player is decreas-ing in the total number of players. The paper also illustrates the fruitful cooperation between probabilistic calculations and stochastic simu-lations: mathematical results can be checked by simulations, and simulations can give a bet-ter insight in the characteristics of a problem when mathematical results are (still) lacking. So, simulation results can be a guide for further understanding.

"The optimal threshold for the first player is increasing in the number of players, whereas the maximum winning probability of the first player is decreasing in the total number of

players."

ORM

m m

14 0.2281 18 0.2008

15 0.2291 19 0.1704

16 0.2267 20 0.1938

17 0.2193

Table 7: Simulations for the winning probability of player A for different values of m.

Page 25: Aenorm 58

25

Youbaraj Paudel

was born in Nepal and came to the Netherlands in 2000. He joined VASVU in 2001 and in 2002 he started his study Business Mathematics and Informatics (BMI) at the Vrije Universiteit Amsterdam. He obtained his master’s degree in July 2007. At this moment he works for Watson Wyatt Worldwide in the Insurance Practice. His interests are among others financial risk management and life insurance.

Quantifying Operational Risk in Insurance Companies

The main objective of this paper is to illustrate how financial institutions including insurance companies can quantify Operational Risk (OR) that meets the regulatory conditions. The lack of loss data is one of the main obstacles for quantifying OR. We briefly discuss the Loss Distribution Approach (LDA) and Bayesian Approach (BA) to estimate the OR losses. In conclusion we will provide some OR estimation results provided by our model and discuss them briefly.1

1 The views expressed in this paper are my own, and not necessarily reflects the opinions of Watson Wyatt Worldwide or its members.2 This topic is discussed in more detail in my internship paper.

Introduction

The financial risk within financial institutions consists of various components like Credit Risk, Market Risk, Business Risk and other types of risks such as Operational Risk (OR). There is a growing pressure from prudential authorities on the management and measurement of OR by both insurance and banking institutions. Both Basel II and Solvency II encourage the banking and insurance sector respectively to measure their OR losses based on an internal model.

The OR is mainly based on the organisational management of internal processes. As internal processes vary within companies a company-specific OR model for estimation and manage-ment is required. Therefore, the statistical mod-els used by different companies are required to reflect company-specific factors. Contrary to other risks, OR is managed by means of chang-ing existing processes such as technology, or-ganisation and people. To be able to manage these, it is necessary to have a lot of company specific information available. Quantifying the informative and qualitative operational data is complex and, even if it is possible, the uncer-tainty factor is very high. This makes it difficult to quantify and express OR in numbers.We observed that the OR quantifying models

developed within the banking sector are mostly based on either scenario-based or curve fitting techniques, and sometimes on the combination of both. The main reason for using this method is that the companies do not want to rely com-pletely on historical data, since that might fail to represent the new emerging risk events in a rapidly changing market. Moreover, many com-panies do not yet have an internal database of historical loss records. The dynamic underlying causes of OR need a progressive and interac-tive approach to risk management where all different kinds of information like expert opin-ion, internal and external historical data, key indicators and other informative data as well as quantitative data might be involved. As this method is thought to be conceptually sound and, if implemented with integrity, it would be recognised under Solvency/Basel II for Advanced Measurement Approach (AMA) status and under Solvency II for an internal model.

The OR estimation model we developed is based on two quantification, the Loss Distribution and Bayesian approach, methods and is discussed in the following chapter. For more detailed in-formation, the reader is referred to my intern-ship paper ‘Quantifying operational Risk within Insurance industry’ July 2007.

OR Estimation

Before we start modelling the OR losses using some statistical methods we should realise that the quality of the outputs of such models de-pend on the quality of the input. Therefore, the first part – the identification/classification of risk variables is very crucial work to be done to provide optimal results. Sound Operational Risk Management consists of the following steps2:

Actuarial Sciences

Page 26: Aenorm 58

26 AENORM 58 January 2008

-Objective setting;-Risk identification and classification;-Qualitative/quantitative data collection and

data cleaning;-Risk measurement;-Risk control monitoring and mitigation proc-

ess;-System infrastructure.

A sound Operational Risk management system can clearly analyse the cause-effect relation-ship in detail. The causes should be defined in various levels so that even the smallest details can be incorporated in the model. Figure 1 de-fines the different steps to connect the specific causes with their respective consequences.

Figure 1: Classification: cause, events and consequences for OR

The entire OR management framework is not discussed in this article. My internship paper can be consult for more details on this topic.

Loss Distribution Approach (LDA)

LDA refers to a statistical actuarial bottom-up approach for modelling OR events losses and their impact. In this chapter we provide a brief description of how this method should be im-plemented in practice and how quantitative and qualitative should be combined to provide in-puts for the model. Here follows a brief descrip-tion.

An insurance company offers various types of products, and they belong to a specific line of business3 i, where i = 1,..,n. Also, different risk

types4 are defined j, where j =1…m. The prob-ability function for number of losses (loss fre-quency N(i,j)) is given by pi,j and the overall loss frequency distribution is then Pi,j(n), which gives:

, ,0

( ) ( )s n

i j i js

P n p s=

=

= ∑ (1)

As there is not enough loss data, it is neces-sary to use the Monte Carlo simulation to com-pensate for this. Therefore we use different frequency fitting models like: Poisson, Gamma and Negative Binomial5.Then the total loss, for each business line i and event type j in the given time interval [t, t + ] is given by:

( , )

0

( , ) ( , )N i j

nn

θ i j ψ i j=

= ∑ (2)

Where denotes the loss amount for event j and business line i.

Figure 2: Loss Frequency/Severity for event j and line of business

3 For instance, the different lines of business are life/non-life/health insurance within the insurance industry.4 The types of risks I include in my model are the same as introduced by Basel II. These are:5 In my internship paper you can find more information about how the parameters for the fitting models concerned are estimated and which inverse functions are used. For detail about how the various parameters for different fitting models are estimated and which inverse functions are used to draw random numbers, you are advised to consult the abovementioned paper.

Actuarial Sciences

Page 27: Aenorm 58

27

Then the compound aggregated loss capital is:

,, 1

,

( ) ( , ) 0( )

(0) 0

i j ni j n

i j

p n *Ψ i j if xG x

p if x

=

⎧⎪ >⎪= ⎨⎪

=⎪⎩

∑ (3)

F is the severity function and n ni jF x x i j, ( ) ( ; , )θ= 6

denotes the severity amount x for line of busi-ness i and event j for nth occurrence (frequen-cy).

Figure 3: Aggregated capital

The last step in LDA is the calculation of the capital charge, which is a VaR measure of risk. We can calculate this amount in two ways: at overall level or at business level. Calculation under the overall level may include the correla-tion information matrix that could arise through the risk event’s dependency. For that reason, a correlation matrix should be integrated in the model to include that dependency in overall capital [1]7. Capital calculation at business level is done by estimating the losses at each busi-ness level and then summing them up.

Theoretically, the overall capital is the sum of expected (EL) and unexpected (UL) OR losses. Based on the previous calculation we can define the EL and UL as following:

( , )

0

( , ) ( , )N i j

nn

EL i j ψ i j=

⎡ ⎤⎢ ⎥= Ε⎢ ⎥⎣ ⎦∑ (4)

Equation (4) denotes the expected losses that is caused by event type j with frequency n emerged in business line i. And the unexpected losses that may arise from OR loss events are given by:

1,( , ; ) ( ) ( , )i jUL i j x G x θ i j−⎡ ⎤= −⎣ ⎦ (5)

Where x denotes the amount that lies at some specific quintile (i.e. 99.5%) level, also referred to as .Writing the equation in open form we get:

,( , ; ) [inf{ | ( ) } ( , )]i jUL i j α x G x α EL i j= ≥ − (6)

Now, it is possible to calculate the required cap-ital as defined by Solvency II by using all these ingredients mentioned above.

( , ; ) ( , ) ( , ; )RC i j α EL i j UL i j α= + (7)8

This is equal to:

−= 1, ( )i jRC G x (8)

Where x is equal to some predefined threshold (i.e. 99.5%).

As shown in the previous section, the LDA ap-proach needs appropriate data fitting models. To determine which model to use for modelling frequency and severity the characteristics of each risk event type should first be precisely understood. Bayesian Approach

Most traditional and classical approaches to es-timate occurrence rates and their impact are mainly based on historical data. If sufficient his-torical data is available, the methods discussed in the previous chapter can be used to provide optimal results, as they possess the asymptotic properties. With sufficient data, representative parameters can be estimated with a large de-gree of certainty. However, this is not possible if not much data is available. Since OR losses

Data available Statistical Approach Prior Inference Empirical Inference

Internal Data only General LDA None None

Expert Data only General LDA None None

External Data only General LDA None None

Internal + External Data Bayesian inference External Data Internal Data

External Data + Expert Opinion

Bayesian inference External Data Expert Data

Expert Opinion Internal + External Data

Bayesian inference External Data Internal Data + Expert Opinion

Table 1: Different alternatives for data manipulation applying different approaches

6 This severity function is a function of losses . 7 For technical background about the dependency/independency among the loss events we refer the interested reader to the paper “Coherent Measures of Risks An Exposition for the Lay Actuary” by Glen Meyers.8 Actually, we can calculate the RC as stated in Basel II, which is equal to: = + EL(i,j).

Actuarial Sciences

Page 28: Aenorm 58

28 AENORM 58 January 2008

Hoeveel is er nodig om onze pensioenen in de toe-

komst te kunnen betalen? Rekening houdend met

de vergrijzing en de economische ontwikkelingen?

Kunnen we straks nog steeds zorgeloos een potje

biljarten? Bij Watson Wyatt kijken we verder dan de

cijfers. Want cijfers hebben betrekking op mensen.

En op maatschappelijke ontwikkelingen. Dat maakt

ons werk zo interessant en afwisselend. Watson

Wyatt adviseert ondernemingen en organisaties

wereldwijd op het gebied van ‘mens en kapitaal’:

pensioenen, beloningsstructuren, verzekeringen en

investeringsstrategieën. We werken voor toonaan-

gevende bedrijven, waarmee we een hechte relatie

opbouwen om tot de beste oplossingen te komen.

Onze manier van werken is open, gedreven en infor-

meel. We zijn op zoek naar startende en ervaren

medewerkers, bij voorkeur met een opleiding Actu-

ariaat, Econometrie of (toegepaste) Wiskunde. Kijk

voor meer informatie op werkenbijwatsonwyatt.nl.

Watson Wyatt. Zet je aan het denken.

T o e t s h e t a a n w e z i g e v e r m o g e nv a n e e n p e n s i o e n f o n d s o m d e i n d e x a t i ev o o r g e p e n s i o n e e r d e n t e b e p a l e n .

-00013_210x297_Biljart.indd 1 11-09-2007 10:41:17

Page 29: Aenorm 58

29

Hoeveel is er nodig om onze pensioenen in de toe-

komst te kunnen betalen? Rekening houdend met

de vergrijzing en de economische ontwikkelingen?

Kunnen we straks nog steeds zorgeloos een potje

biljarten? Bij Watson Wyatt kijken we verder dan de

cijfers. Want cijfers hebben betrekking op mensen.

En op maatschappelijke ontwikkelingen. Dat maakt

ons werk zo interessant en afwisselend. Watson

Wyatt adviseert ondernemingen en organisaties

wereldwijd op het gebied van ‘mens en kapitaal’:

pensioenen, beloningsstructuren, verzekeringen en

investeringsstrategieën. We werken voor toonaan-

gevende bedrijven, waarmee we een hechte relatie

opbouwen om tot de beste oplossingen te komen.

Onze manier van werken is open, gedreven en infor-

meel. We zijn op zoek naar startende en ervaren

medewerkers, bij voorkeur met een opleiding Actu-

ariaat, Econometrie of (toegepaste) Wiskunde. Kijk

voor meer informatie op werkenbijwatsonwyatt.nl.

Watson Wyatt. Zet je aan het denken.

T o e t s h e t a a n w e z i g e v e r m o g e nv a n e e n p e n s i o e n f o n d s o m d e i n d e x a t i ev o o r g e p e n s i o n e e r d e n t e b e p a l e n .

-00013_210x297_Biljart.indd 1 11-09-2007 10:41:17

are rare and asymmetric, it is quite difficult to estimate these based on only a few historical data [2]. For this approach we have three kinds of loss data: Internal, External and Expert or subjective data. We use the external loss data as posterior information, while the internal and expert data are modelled as prior information. The Bayesian method is based on a data-com-bination approach and integrates SBA and SMA in it to estimate OR losses for different events [3]. The Bayesian approach can mainly be used in the following situations: there is no or not much loss data, it is necessary to include sub-jective information in results, and the prior and posterior information is consistent.With these three types of data we have the dif-ferent possibilities either for LDA or for Bayesian inference.

By nature, the Bayesian method automati-cally considers the uncertainty associated with various estimation parameters of a probability model that originated from different informa-tion sources. Hence, the Bayesian method is currently mainly recommended as an appropri-ate manner to mix qualitative and quantitative data and information, such as expert opinion, internal and external data. At the same time we have to consider some shortcomings of this approach, i.e. generally it produced extreme-ly overconfident results and has technical and computational limitations and it is generally known that this method overuses the averaging and aggregating information.

The application of the Bayesian approach is based on the so-called Bayesian theorem. This connects the conditional9 and marginal10 prob-ability distribution of random variables. The Bayesian theorem tells us how to update or re-vise viewpoints in light of new evidence. For two stochastic events, A and B, respectively having the probability P(A) ≠ 0 and P(B) ≠ 0, the defi-nition of the conditional probability A giving B is given by [4]:

( )( | )( )

P A BP A BP B∩

= (9)

Equation (9) can also be written in a different way, and this provides us with:

( | ) ( )( | )( )

P B A P AP A BP B

= (10)

Equation (10) literally states11: Posterior infor-mation = (likelihood * prior) / normalizing con-stant.We will now translate the above mentioned Bayesian concept into a Bayesian computation-al approach for OR loss estimation [4]. Using the same Bayesian theorem we can write a con-ditional probability distribution of B given the observed data A as following:

( | ) ( )( | )( )

P A B P BP B AP A

= (11)

The P(B) (i.e. internal loss data or other sub-jective information) represents information that is known without knowledge of data (i.e. external data). Therefore, P(B) is called prior information and the distribution of B a priori. The P(A) represents the distribution function for prior information based on available data and is also known as a priori. The core point of the Bayesian theorem, subsequent distribution based on these two types of data is then given by equation (9) and (10). As equation (11) now depends on the given data B, this is no longer the function of A, but of B. From that point on-wards we call this function the ‘Likelihood’ func-tion of B for giving historical data A (internal or external data). Now, we can write equation (11) as [4]:

( ) ( )( | ) |p B A l B A p B= (12)12

This implies:

Internal DataFrequency + Severity

Data Horizon (yrs) 5,000 5,000 5,000 5,000 5,000 5,000 5,000

Loss reported: how many times 9,000 9,000 9,000 9,000 9,000 9,000 9,000

Standard Deviation Frequency 0,500 0,200 1,000 3,000 2,100 2,000 2,000

Mean Loss Severity 12,000 12,000 12,000 12,000 12,000 12,000 12,000

SD Loss Severity 2,500 2,500 2,500 2,500 2,500 2,500 2,500

Table 2: Input Internal loss data for the OR estimation model

9 P(A | B) is a conditional probability, and also to mentioned as likelihood (L(A | B)) of A given event B. 10 P(A) and P(B) are the marginal probabilities.11 Most Bayesian based calculations have been done in a numerical way as it is too complex for an analytical calculation. Based on the type of data it is possible to use either a continuous or a discrete form of this approach. The following expression can be used for these two forms: p(A) = E[p(A | B)] which equals if B is continuos and ∑ p(A | B)p(B) if B is discrete. In this case E[p(A | B)] denotes the mathematical expression for expectation of p(A | B) with respect to function p(B).12 The Bayesian theorem tells us that the probability distribution for B posterior to the data A is proportional to the product of the distribution for B prior to the data and the likelihood for B given A.

Actuarial Sciences

Page 30: Aenorm 58

30 AENORM 58 January 2008

Posterior Distribution likelihood*prior distribution

However, estimating the parameters for both prior and posterior distribution is not an easy task. In this article we do not discuss the pa-rameters estimation techniques.

Results

In this chapter we provide some results gener-ated by above mentioned model. The OR loss estimation model concerned was written in Excel using VBA-language. As there are several combinations in this model, it is not possible to discuss all of them in this paper. Therefore, we discuss here only one alternative in which only the internal loss data serves as input and LDA for modelling purpose.

We have used the inputs from table 2 to provide the following results:

Severity Frequency E(X)

Lognormal Poisson ± 34

Lognormal Neg-Binomial ± 48

Pareto Poisson ± 38

Pareto Neg-Binomial ± 75

Normal Poisson ± 25

Normal Neg-Binomial ± 30

Table 3: Expected loss Severity with different fitting models.

The results in table 3 describe the differences among the different alternatives for OR estima-tion. As might be expected, the loss data gener-ated by the combination of normal and Poisson distribution for loss severity and frequency re-spectively is the lowest one. As both distribu-tions have a symmetric and smooth character and Poisson is one parametric, the difference in result is quite easy to describe. We can also ob-serve that the loss severity generated through the combination of Pareto and Negative-bino-mial provides the largest loss amount. Since both fitting models are defined by two param-eters and have a fat tail, the results should ap-parently be higher in this case than in other cases.

Bibliography

[1] Franchot, A., Roncalli, T. and Salomon, E. (2003). “The Correlations Problem in Operational Risk”.

[2] Quigley , J., Bedford, T. and Walls L (2005). “Estimating Rate of Occurrence of Rare Events withy Empirical Bayes”.

[3] Ferson, S. (2003/2004). “Bayesian Methods in Risk Assessment”.

[4] Yasuda, Y. (2003). “Application of Bayesian Inference to Operational Risk Management”.

[5] Trip, M.H. and others (2004). “Quantifying Operational Risk in general Insurance Companies”.

[6] Bank of Mauritius (2005). “Guideline on Operational Risk Management and Capital Adequacy determination”.

[7] Tunkey, J. (2003). “Operational Risk and Professional Risk Management”.

[8] Frachot, A., Georges, P. and Roncalli, T. (2001). “Loss Distribution Approach for Operational Risk”.

[9] Quigley, J., Bedford, T. and Walls, L. (2005). “Estimating Rate of Occurrence of Rare Events withy Empirical Bayes”.

Actuarial Sciences

*connectedthinking©2007 PricewaterhouseCoopers. Alle rechten voorbehouden.

Assurance • Tax • Advisory

of weet jij* een beter moment voor de beste beslissing van je leven?www.werkenbijpwc.nl

2833-08 PwC Adv. Beslissing A4 V1 1 17-09-2007 17:01:47

Page 31: Aenorm 58

31

*connectedthinking©2007 PricewaterhouseCoopers. Alle rechten voorbehouden.

Assurance • Tax • Advisory

of weet jij* een beter moment voor de beste beslissing van je leven?www.werkenbijpwc.nl

2833-08 PwC Adv. Beslissing A4 V1 1 17-09-2007 17:01:47

Page 32: Aenorm 58

32 AENORM 58 January 2008

Effects of invalid and possibly weak instruments

When in a regression model some of the explanatory variables are contemporaneously correlated with the disturbance term, and this correlation is unknown, then one needs further variables in order to find consistent estimators by the method of moments. These instrumental variables should have a known (usually zero) correlation with the disturbances. In practice, however, it is usually difficult to assess whether an instrumental variable satisfies the moment condition. Firstly, instrument validity or orthogonality tests are only viable under just identification or overidentification by truly valid instruments. Moreover, orthogonality tests will have reasonable power only when the instruments employed and those under test are not too weak (are sufficiently correlated with the regressors) and the sample size is substantial. Therefore, it seems very likely that IV estimation will often be employed when some of the instruments are in fact invalid.

Jan Kiviet

is Professor of Econometrics in the Department of Quantitative Economics of the University of Amsterdam. He is a Fellow of the Tinbergen Institute and of the Journal of Econometrics, and Director of the research group UvA-Econometrics.

Jerzy Niemczyk

is a PhD student in the Department of Quantitative Economics of the University of Amsterdam. Before starting his PhD in 2004, he obtained an M.Sc in Mathematics at the Wroclaw University of Technology, and an M.Sc in Economics and Statistics at the Free University of Brussels.

In this paper we consider the asymptotic distri-bution of an IV estimator in a linear regression model when some of its exploited orthogonal-ity conditions actually do not hold. We focus on a single structural equation that otherwise has been specified correctly in the sense that its im-plied series of error terms is IID (independent and identically distributed) with unconditional expectation equal to zero. We cover the general case where the number of moment conditions exploited (l) is at least as large as the number of unknown coefficients, k ≤ l. In terms of pa-rameters and data moments, we derive an ex-pression for the inconsistency of the IV estima-tor and the asymptotic variance of the limiting normal distribution around this inconsistency. These results yield a first-order asymptotic ap-proximation to the actual distribution of incon-sistent IV estimators in finite sample. We verify by simulation whether these analytic findings are accurate regarding the actual estimator dis-tribution in finite sample.In the experiments, we focus on a simple model with one explanatory variable and one instru-ment. From our simulations we establish that an invalid but reasonably strong instrument yield IV estimator which has a distribution in small samples that is rather close to the as-ymptotic approximation derived here. Hence, the distribution of this estimator is often close to normal, but instead of the true value, has its probability mass centered around the pseu-do-true-value. However, when the instrument is very weak, we establish that the accuracy of standard large-sample asymptotics is very poor, as had already been established for the valid instrument case, see for example Bound, Jaeger, and Baker (1995).

Model, assumptions and theorems

We consider data generating processes for vari-ables for which n observations have been col-lected in the rows of y = (y1, ..., yn)’, X = (x1, ..., xn)’ and Z = (z1, ..., zn)’. The matrices X and Z have k and l columns respectively, with l ≥ k. Column vector xi contains the explanatory variables for yi in a linear structural model with structural disturbance , i = 1, ..., n. The l vari-ables collected in Z will be used as instrumen-tal variables for estimating the k structural pa-rameters of interest . The basic framework is characterized by the following parametrization and regularity conditions, which involve linear-ity and stationarity.

Framework 2.1 We have: (i) the structural equa-tion y = ; (ii) with disturbances having (for i ≠ h ≠ 1, ..., n) the (finite) unconditional moments E( ) = 0, E( ) = 0, E( ) = , E(

) = and E( ) = ; (iii) while E(xi | ) = and E(zi | ) = , with and fixed

Econometrics

Page 33: Aenorm 58

33

the result for models with disturbances that have 3th and 4th moments corresponding to those of the normal distribution, and then ob-tain:

Theorem 2.1 If μ3 = 0 and μ4 = 3, we have n1/2(β̂GIV β̂GIV → N(0,VGIV) with

N 2 2GIV ε εX X

X XX X X X X X

X X GIV GIV GIV GIV4

X X εX X2εX X X X

V σ c c c σ c

c

β β' β β'

σ c c

ξξ σ c c

1 25 3 4 4ˆ ˆ'1 1 1

' 4ˆ ˆ ˆ ˆ ˆ ˆ' ' '

'1

' 4 4ˆ ˆ'1 1

4 5ˆ ˆ ˆ ˆ' '

(1 )Σ *

Σ Σ Σ [Σ *

Σ *

Σ Σ ] (1 2 )*

Σ 'Σ (1 2 )

− − −

− −

= − + +

+

+ −

+ −

GIV GIVX X X X-2ε GIV X X

GIV GIV GIV

ξβ' β' ξ

c c c σ β'

β β β'

1 1ˆ ˆ ˆ ˆ' '

5 5 3 '

*

[Σ 'Σ ]

[ (1 2 ) ( Σ *

)] ,

− −+ +

− − +

where c1 ≡ ≡ βGIV, c3 ≡ βGIV, c4 ≡ c1 c2 and c5 ≡ 1 c3 c4.

We find that the limiting distribution of β̂GIV is still genuinely normal when instruments are invalid, although no longer centered at but at the pseudotrue-value . When all instru-ments are valid, i.e. = 0, then Theorem 2.1 specializes to the standard result n1/2(β̂GIV ) → N(0, X Xˆ ˆ' ). For the special case l = k we have βGIV = , and as a result the above asymptotic variance, VIV , simplifies to

2ε Z X Z Z X Z

-2ε IV X X IV IV IV

σ c c c

σ β' β β β'

2 1 1 23 ' ' ' 3 3

'

(1 ) Σ Σ Σ [2 2

1 ( Σ )] ,

− −− − − +

−(4)

where c3 ≡ βIV = . Since for arbitrary ≠ 0 and ≠ 0 the scalar c3 can either be

positive or negative, no general conclusions can be drawn on the behavior of (4) in comparison to the reference case . Depending on the particular parametrization and data mo-ment matrices the asymptotic variance of indi-vidual coefficient estimates may either increase or decrease.When Z = X, which gives and β̂IV = β̂OLS, the resulting VIV = VOLS is the same as the for-mula found for an inconsistent OLS estimator when the disturbances are (almost) normal, as derived in Kiviet and Niemczyk (2007a). For the case where the disturbances may have general 3rd and 4th moment see Kiviet and Niemczyk (2007b).The result in Theorems 2.1 in fact address spe-cial case of Theorem 2 in (Hall and Inoue, 2003, p.369). The latter theorem is more general, but more implicit at the same time. It concerns GMM estimation of a possibly nonlinear mis-specified model and expresses its asymptotic

parameter vectors of k and l elements respec-tively. Moreover, (iv) ≡ plim 1/n X’X, ≡ plim 1/n Z’Z and ≡ plim 1/n Z’X have all full column rank, and (v) so have X’X, Z’Z and Z’X with probability one. Finally, (vi) we have E(1/n Z’Z | Z) = op( n1/ ) and E(1/n X’X | X , Z) = op( n1/ ), where X = (x1, ... , xn)’ and Z = (z 1, ..., zn)’ with xi ≡ E( | ) and z i ≡ zi E(zi | ).

Note that the latter definitions imply the de-compositions X ≡ X + and Z ≡ Z + . From (iii) we find E(X ) = 0 and E(Z ) = 0, whereas

E(X ) = and E(Z’ ) = (1)

Hence, if = 0 for some j ∈ {1, ..., k} then the j-th regressor in X is predetermined and will es-tablish a valid instrument; otherwise, when ≠ 0, the j-th regressor is endogenous. Likewise, if

= 0 for some g ∈ {1, ..., l} then the g-th col-umn of Z establishes a valid instrument, and an invalid instrument otherwise. It can be shown that (vi) boils down to the mild regularity as-sumptions 1/n Z’Z plim 1/n Z ’Z = op( n1/ ) and 1/n X ’Z plim 1/n X ’Z = op( n1/ ).Since l ≥ k the generalized instrumental vari-able (GIV) or 2SLS estimator of exists and is given by

β̂GIV = [X’Z(Z’Z)-1Z’X]-1X’Z(Z’Z)-1Z’y (2) = ( X̂ ’X̂)-1 X̂y,

with X̂ ≡ ZΠ̂, where Π̂ = contains the (reduced form) coefficient estimates of the first-stage regressions. We define the pseudo-true-value of β̂ as

− −

= +

*

2 1 1' ' '

1' '

ˆ

[Σ Σ Σ ] *

Σ Σ

GIV GIV

ε X Z Z Z Z X

X Z Z Z

β plim β

β σ

ς,

(3)

and we denote the inconsistency of β̂GIV by

GIV GIV ε X Xβ β β σ ς,* 2 1

ˆ ˆ'Σ '−≡ − = Π

where we used X Xˆ ˆ' ≡ and ∏ ≡ . Note that the GIV estimator is consistent if and only if = 0.For the special case l = k (just identification), the above GIV results specialize to simple IV, i.e. β̂IV = and βOLS = . When in fact Z = X (all regressors are used as instru-ments), i.e. , then IV specializes to OLS, i.e. β̂OLS = (X’X)−1X’y and βOLS = .Here, for the sake of simplicity, we only present

-1

-1

N

-1 -1

-1

-1

-1

*

-1

-1

N

-1

-1 -1

N N

Econometrics

Page 34: Aenorm 58

34 AENORM 58 January 2008

variance matrix in a few model characteristics and a matrix Ω. This matrix, which is not fur-ther specified, is the variance of the limiting distribution of a particular (2k +l) × 1 vector v* . For the linear model, our approach allows to evaluate explicitly the covariance matrix Ω, and consequently the asymptotic variance of GIV.

Experiments

To illustrate the asymptotic finding presented in the foregoing section, we examine a simple model with one regressor and one instrument,

i i i

i i i

i i i

y βxx x ξz z ζ

εεε

= + ⎫⎪= + ⎬⎪= + ⎭

(5)

where and are scalars1. In the simulation, we employ disturbances that are normally dis-tributed, and without loss of generality we fix = 1. For this model, writing for ,

or for , etc, and using = / , = / , we find that the expression for the inconsistency and the asymptotic variance can be expressed as

2 zε zε εIV ε X Z

xz xz x

σ ρ σβ σ ζ

σ ρ σ1'Σ−= = = (6)

and

2N ε zε xz zε xεIV 2

x xz

zε xε

xz

σ ρ ρ ρ ρV

σ ρ

ρ ρ

ρ

2 2

4

4 2

4

(1 )( )

(1 ).

− −= +

− (7)

The expression for the inconsistency βGIV shows that its sign is determined by the sign of /

, whereas its magnitude is inversely related to the strength of the instrument, cf. Bound, Jaeger, and Baker (1995). VIV is unaffected by the signs of , and as long as the sign of the product remains the same, or when either or is zero. Self-evidently, VIV diverges for approaching zero.

An important characteristic of the model is the signal-to-noise ratio (SN), which is here equal to

2 2 2x x2 2ε ε

β σ σSN

σ σ.= = (8)

From (6) and (7) we see that VIV and β are pro-portional to (the square root of) the inverse of SN and that the approximation to the distribu-tion of the IV estimator

αN

IV IV1

β N β+β, Vn

ˆ ( )��∼ (9)

is completely determined by n and the four model characteristics z, , and SN.In order to generate ( , , zi)’ ∼ IID(0, Ω), with appropriate 3 × 3 covariance matrix Ω, we can first generate vi = (vi,1, vi,2, vi,3)’ ∼ IIN(0, I3) and then parameterize as follows:

= ,x i = ,z i = + .

The above implies

i

i i

i

i,1ε

ε i,2

ε i,3

εx vz

vσ 0 0 σ ξ α 0 v

σ ζ α α v

1 /2

1

2 3

.

⎛ ⎞⎜ ⎟ = Ω =⎜ ⎟⎜ ⎟⎝ ⎠

⎛ ⎞⎛ ⎞ ⎜ ⎟⎜ ⎟ ⎜ ⎟⎜ ⎟ ⎜ ⎟⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠

(10)

Further, in this simple model we have

i i i iIV

i i i i

z y z εβ β+

z x z xˆ ,= =∑ ∑

∑ ∑ (11)

which clarifies that, irrespective of the sample size, the distribution of β̂IV is invariant to the scale of zi. We may also change the sign of all the zi without affecting β̂IV. Therefore, we may restrict ourselves in the illustrations to cases with > 0 and = + + = 1, from which = |(1 )1/2|. Also, without loss of generality, see the full version of the paper, we can restrict ourselves to 0 and normal-ize = 1, which leads to SN = . The rest of the parameters of the DGP will be obtained through the relationships

xε x

x xε

xz xε zε xε

ζ ρ

ξ ρ σ

α σ ρ

α ρ ρ ρ ρ

2 1 /21

2 1 /22

,,

| (1 ) |,

( ) / | (1 ) |.

=

=

= −

= − −

(12)

This reparametrization is useful, because the parameters , and and SN have a di-rect econometric interpretation, viz. the degree of simultaneity, instrument (in)validity, and in-strument strength, whereas SN is directly re-lated to the model fit, which can be expressed as SN/(SN + 1). From the above it follows that by varying the four parameters we can exam-ine the limiting and finite sample distributions of β̂IV over the entire parameter space of this model. Note, however, that not all admissible

N

N

1 This paper contains a few series of graphs only; more complete and animated pictures are available via the web, see: www.feb.uva.nl/ke/jfk.htm

N

Econometrics

Page 35: Aenorm 58

35

values of these parameters will be compatible. For example, when is large and is small, this cannot be compatible with being very large. Moreover, has just an effect on the scale of βGIV, VIV and β̂IV , so we fix / = 10, yielding a population fit of the model of 10/11 = 0.909.The empirical distribution of β̂IV and the as-ymptotic approximation (9) can now be calcu-lated for any set of compatible values of n, ,

, and can be compared in order to see how close they are. Since, when not have finite first or second moments in finite sample, instead of analyzing mean squared er-ror of the estimator, which does not exist then, we analyze its median absolute error (MAE). For a scalar estimator β̂ of , MAE(β̂) is defined as

β β| MAE β)}=ˆ ˆPr{| ( 0.5.− ≤ (13)

From a series of independent Monte Carlo re-aliz we estimate MAE(β̂) by taking the median of the sorted | val-ues. The asymptotic version AMAE(β̂) of MAE(β̂) we asses in the following way. Let the CDF of the normal approximation to the distribution of β̂ be indicated by

ββ,σ x

ˆ( )Φ . Then, for ≡

AMAE(β̂), we have

β β

α

β,σ β,σ

β β| m}= m m

ˆ ˆ

ˆ0.5 Pr{|( ) ( )

= − ≤Φ − Φ −

so that we solve from

β ββ,σ β,σm m

ˆ ˆ( ) 0.5 ( ).Φ = + Φ − (14)

For several different parameter values we will first examine the empirical density functions

and the asymptotic approximation (9). Then, we will compare MAE with AMAE over almost the entire parameter space. We will also compare MAE with . The lat-ter estimator always uses extremely strong in-struments that at the same time are invalid in case of simultaneity.The Figures 1 through 4 contain 4 panels each. In every panel, four densities are presented, black lines for = 50 and red lines for = 200, whereas are for the empirical distri-bution and for its asymptotic ap-proximation. Four panels at each figure have different combinations of and values. The left-hand panels have = 0 (the instrument is valid and the standard asymptotic result ap-plies), and the right-hand panels have = 0.2 (the instrument is invalid and the IV estimator is inconsistent). The two rows of panels cover the cases = 0.3 and = 0.6 (although wedo not have symmetry with respect to , for a sake of brevity and because we noticed that

only the magnitude of can matter, we present our results for positive only. In Figure 1 = 0.8, so the instrument is certainly not weak. In Figure 2, 3 and 4, = 0.3, 0.1 and 0.01 respectively. Hence, in the Figure 4 the instru-ment is certainly weak. In the simulations we used 1,000,000 replications.From Figure 1 we see in the left-hand column that the standard asymptotic approximation of IV when using a valid and strong instrument is quite accurate when the simultaneity is not very serious, but deteriorates when increas-es, especially when is small. We note some skewness and one fat tail (which can not be captured by the first-order normal asymptotic approximation), but the asymptotic distribu tion is never extremely bad for the cases examined. In the right-hand column we see that the new result of Theorem 2.1 is almost of the same quality but slightly less accurate. In Figure 2, where the instrument is weaker, we find that when the instrument is valid the distribution is more skewed, and more so for serious simul-taneity. There is a substantial but not a dra-matic difference between the actual distribution and its approximation. The discrepancies are more pronounced in Figure 3, and affect both the standard ( = 0) and the new ( ≠ 0) asymptotic approximations. From Figure 4 it is clear that the asymptotic approximations are useless (at the sample sizes examined) when the instrument is really weak. When the instru-ment is valid the actual distributions show some median bias, but they are much less dispersed than suggested by .From Figures 1 through 4 we conclude that, ir-respective of whether the instruments are valid or not, one should avoid to use standard large sample asymptotics when instruments are re-ally weak. If one replaces the weak instrument with a strong one that is invalid (which is al-ways possible by reverting to OLS), one obtains an inconsistent estimator, such as depicted in the right-hand column of Figure 1, which has a distribution that is actually much more con-centrated around the true value than that of the consistent estimator depicted in the left-hand column of Figure 4. Figure 5 provides an overview of the (in)accuracy in finite sample of the asymptotic approximation (9) to the actu-al distribution of IV for = 100, expressed as log[MAE /AMAE ]. These figures (based on 10,000 replications) cover all compatible positive values of and , for = 0, 0.1, 0.3 and 0.6. Hence, positive values (yellow, amber) indicate larger absolute errors in finite sample than indicated by the asymptotic ap-proximation and negative values (blue) indicate that standard asymptotics is too pessimistic about the absolute errors of β̂IV in finite sam-ple. Note that this log-ratio is invariant regard-ing the value of SN = / . We find that the degree of simultaneity has little effect, and

N

N

Econometrics

β̂

β̂β̂

β̂β̂ β̂

β̂ β̂

β̂ β̂

Page 36: Aenorm 58

36 AENORM 58 January 2008

Econometrics

β̂ β̂ β̂ β̂

Page 37: Aenorm 58

37

neither has the (in)validity of the instrument . Just instrument weakness (roughly, when || < n1/ ) seriously deteriorates the accuracy of the large-n asymptotic approximation.Figure 6 examines log[MAE(β̂IV)/MAE(β̂IV)], which is also invariant with respect to SN. It shows that in finite sample the absolute estima-tion errors committed by OLS are larger than those of IV only when both and are large. The area where IV beats OLS gets smaller for larger .We also note that OLS may beat IV by a much larger margin (when the instrument is weak and the simultaneity not so serious) than IV will ever beat OLS (which happens when the instru-ment is strong, the simultaneity serious, and the instrument not severely invalid).

Conclusions

In this study we present an explicit formula for the asymptotic variance of the generalized in-strumental variable estimator when some of the employed instruments are invalid. We showed that the limiting distribution of such an incon-sistent estimator is normal, and is centered at the pseudo-true-value, whereas its asymptotic variance includes a number of terms and fac-tors additional to the standard result. To obtain our results we assumed covariance stationarity of all variables. In the simple illustrative mod-els which we used, the data observations are in fact IID, as is often assumed in cross-section applications. Note, however, that our theorems also hold for time-series applications.Our study shows that it is possible to obtain an explicit large sample asymptotic approximation to the distribution of IV estimators when someof the exploited moment conditions are invalid. Not surprisingly, however, that approximation is found to be vulnerable when instruments are weak. One option now would be to replace it by an approximation that aims to cope with weak-ness of instruments. However, our illustrations suggest an alternative approach in which the employment of weak instruments, which invari-ably yields estimators with flat distributions, is abandoned altogether. We saw that exclusively exploiting strong instruments, even if these constitute invalid instruments, we can produce a reasonably accurate approximation to the fi-nite sample distribution. But, to render this ap-proximation feasible one requires information on the simultaneity parameter and the instru-ment invalidity parameter . That seems hard to obtain, and if such information was available other estimators than those obtained by mini-mizing an (in)appropriate GMM criterion func-tion might be better for producing accurate in-ference on the coefficient .

References

Bound, J., Jaeger, D.A. and Baker, R.M. (1995). Problems with Instrumental Variables Estimation When the Correlation Between the Instruments and the Endogenous Explanatory Variable is Weak, Journal of the American Statistical Association, 90(430), 443-450.

Hall, A. and Inoue, A. (2003). The large sample behaviour of the generalized method of moments estimator in misspecified models, Journal of Econometrics, 114(2), 361-394.

Kiviet, J. and Niemczyk, J. (2007a). The asymptotic and finite sample distributions of OLS and simple IV in simultaneous equations, Computational Statistics and Data Analysis, 51(7), 3296-3318.

Kiviet, J. and Niemczyk, J. (2007b). On the limiting and empirical distribution of IV estimators when some of the instruments are invalid, UvA-Econometrics discussion paper 2006/02.


Dice Games and Stochastic Dynamic Programming

This article considers stochastic optimization problems that are fun and instructive for teaching purposes on the one hand and involve challenging research questions on the other hand. These optimization problems arise from the dice game Pig and the related dice game Hog. Both games are very popular board games in the USA; see reference 1.

Henk Tijms

is professor in Operations Research at the Vrije Universiteit Amsterdam. He studied mathematics at the University of Amsterdam and got his PhD degree in 1972 at the same university. His research interests are in the fields of applied probability and stochastic optimization. He has published several textbooks in these fields, including his recent book Understanding Probability. He also has a strong interest in the popularization of probability and operations research and developed for this purpose the software package ORSTAT-2000, which can be freely downloaded from his homepage.

The game of Pig

The game of Pig involves two players who in turn roll a die. The object of the game is to be the first player to reach 100 points. In each turn, a player repeatedly rolls a die until either a 1 is rolled or the player holds. If the player rolls a 1, the player gets a score of zero for that turn and it becomes the opponent's turn. If the player holds after having rolled a number other than 1, the total number of points rolled in that turn is added to the player's total score and it becomes the opponent's turn. At any time during a player's turn, the player must choose between the two decisions "roll" or "hold".

The game of Hog

The game of Hog (fast Pig) is a variation of the game of Pig in which players have only one roll per turn but may roll as many dice as desired. The number of dice a player chooses to roll can vary from turn to turn. The player's score for a turn is zero if one or more of the dice come up with the face value 1. Otherwise, the sum of the face values showing on the dice is added to the player's score.

In this paper we will concentrate on the practical question of how to compute an optimal control rule for various situations. This will be done using the technique of stochastic dynamic programming. The computations reveal that the optimal control rule has certain structural properties, but the interesting research problem of giving a mathematical proof of these structural properties is left open. The optimality equations of dynamic programming provide not only the tool for numerical computations, but are also the key to theoretical optimality proofs. An even more exciting open research problem is the game-theoretic variant of the problems above. In this variant the two players have to take decisions simultaneously in each round of the game rather than sequentially; the players cannot observe each other's actions in any given round of the game, but know each other's score after each round.

How is the paper organized? First we analyze, for the game of Pig, the single-player version under various optimality criteria. Then the two-player version of the game of Pig is analyzed. Next, we discuss the game of Hog, whose analysis is very similar to that of the game of Pig. Finally, we briefly discuss the game-theoretic variant of the game of Hog.

The game of Pig

For the single-player version the following two optimality criteria can be considered:

• minimal expected number of turns to reach 100 points

• maximal probability of reaching 100 points in a given number of turns.

The optimal control rules can be calculated from the optimality equations of stochastic dynamic programming, but these optimal rules are rather complex and difficult to use in practice. Therefore we also consider the simple "hold at 20" heuristic and compare the performance of this heuristic with the performance of the optimal rule. The "hold at 20" rule is as follows: after rolling a number other than 1 in the current turn, the player holds that turn when the number of points accumulated during the turn is 20 or more.

The rationale of this simple heuristic is easily explained. Suppose that k points have been accumulated so far in the current turn. If you roll again, the expected number of points you gamble away is (1/6) × k, while the expected number of additional points you gain is (5/6) × 4, using the fact that the expected value of the outcome of a roll of a die is 4 given that the outcome is not 1. The first value of k for which (1/6) × k ≥ (5/6) × 4 is k = 20.

It turns out that the "hold at 20" heuristic performs very well when the criterion is to minimize the expected number of turns to reach 100 points. As will be shown below, the expected value of the number of turns to reach 100 points is 12.545 when the "hold at 20" heuristic is used, and this lies only 0.7% above the minimal expected value 12.367 that results when an optimal control rule is used. The situation is different for the criterion of maximizing the probability of reaching 100 points when no more than N turns are allowed, with N a given integer. Under the "hold at 20" heuristic the probability of reaching 100 points within N turns has the respective values 0.0102, 0.0949, 0.3597, 0.7714, and 0.9429 for N = 5, 7, 10, 15, and 20, whereas this probability has the maximum values 0.1038, 0.2198, 0.4654, 0.8322, and 0.9728 when an optimal rule is used.
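These figures are easy to approximate by simulation. The following minimal Python sketch is our own illustration, not part of the original analysis; it estimates the expected number of turns under the "hold at 20" rule.

```python
import random

random.seed(42)

def turns_to_100(hold_at=20):
    """Play single-player Pig with the 'hold at hold_at' rule and
    return the number of turns needed to reach 100 points."""
    score, turns = 0, 0
    while score < 100:
        turns += 1
        turn_total = 0
        while True:
            roll = random.randint(1, 6)
            if roll == 1:
                turn_total = 0          # busted: the turn yields nothing
                break
            turn_total += roll
            if turn_total >= hold_at or score + turn_total >= 100:
                break                   # hold (or win)
        score += turn_total
    return turns

n = 200_000
print(sum(turns_to_100() for _ in range(n)) / n)  # close to 12.545
```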

Dynamic programming for the single-player version

In the dynamic programming analysis of the single-player version, a state variable should be defined together with a value function. The state s of the system is defined by a pair s = (i, k), where

i = the player's score at the start of the current turn,
k = the number of points obtained so far in the current turn.

We first consider the criterion of minimizing the expected number of turns to reach 100 points. For this criterion, the value function V(s) is defined by

V(s) = the minimal expected value of the number of turns, including the current turn, needed to reach 100 points starting from state s.

We wish to compute V(0, 0) together with the optimal decision rule. This can be done from Bellman's optimality equations. For any k ≥ 0 and i + k < 100,

$$V(i,k) = \min\left[\,1 + V(i+k,0),\ \frac{1}{6}\bigl(1 + V(i,0)\bigr) + \frac{1}{6}\sum_{r=2}^{6} V(i,k+r)\right],$$

where V(i, k) = 0 for all (i, k) with i + k ≥ 100. The first term on the right-hand side corresponds to the decision "hold" and the second term corresponds to the decision "roll" (of course, in the states (i, 0) the action "roll" is always taken). The optimality equation can be solved by the method of successive substitutions. Starting with V0(s) = 0 for all s, the functions V1(s), V2(s), . . . are recursively computed from

$$V_n(i,k) = \min\left[\,1 + V_{n-1}(i+k,0),\ \frac{1}{6}\bigl(1 + V_{n-1}(i,0)\bigr) + \frac{1}{6}\sum_{r=2}^{6} V_n(i,k+r)\right].$$

By a basic result from stochastic dynamic programming,

$$\lim_{n\to\infty} V_n(s) = V(s) \quad \text{for all } s.$$

In the literature, bounds are known for the difference V_n(s) − V(s), providing a stopping criterion for the method of successive substitutions. It appears from the numerical computations that the structure of the optimal policy is of the control-limit type. An open research question is to give a mathematical proof that the optimal policy is always of this specific structure. Another interesting problem is to prove a kind of turnpike result, that is, that the optimal rule uses the decision from the "hold at 20" heuristic as long as the player is sufficiently far away from the desired number of points.

Let us next consider the optimality criterion of maximizing the probability of reaching 100 points in no more than N turns, with N a given integer. Then we define, for m = 0, 1, . . . , N, the value functions Pm(s) by

Pm(s) = the maximal probability of reaching 100 points from state s when no more than m turns can be used including the current turn,

where Pm(s) = 1 for all s = (i, k) with i + k ≥ 100. The desired probability PN(0, 0) and the optimal decision rule can be calculated from Bellman's optimality equation. For i = 99, 98, . . . , 0 and k = 100 - i, . . . , 0,

$$P_m(i,k) = \max\left[\,P_{m-1}(i+k,0),\ \frac{1}{6}P_{m-1}(i,0) + \frac{1}{6}\sum_{r=2}^{6} P_m(i,k+r)\right].$$

The value functions P1(s), P2(s), . . . , PN(s) can be recursively calculated, using the fact that Pm(i, k) = 1 if i + k ≥ 100 and starting with

$$P_0(i,k) = \begin{cases} 1 & \text{if } i + k \geq 100,\\ 0 & \text{if } i + k < 100.\end{cases}$$
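This recursion is straightforward to implement. The following minimal Python sketch is our own illustration (the function and variable names are not from the original); it memoizes the value functions and should reproduce the optimal-rule probabilities quoted earlier.

```python
from functools import lru_cache

GOAL = 100

@lru_cache(maxsize=None)
def P(m, i, k):
    """Maximal probability of reaching GOAL from state (i, k) when at
    most m turns, including the current one, may still be used."""
    if i + k >= GOAL:
        return 1.0          # the goal has been reached
    if m == 0:
        return 0.0          # no turns left
    hold = P(m - 1, i + k, 0)
    roll = P(m - 1, i, 0) / 6 + sum(P(m, i, k + r) for r in range(2, 7)) / 6
    return max(hold, roll)

print(P(10, 0, 0))  # should match the value 0.4654 quoted above for N = 10
```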

The analysis for the "hold at 20" heuristic proceeds along similar lines as the analysis for an optimal rule. In the case of the heuristic there is only one decision possible in each state. Thus, defining the function H(s) as the expected number of turns, including the current turn, needed to reach 100 points from state s when using the "hold at 20" heuristic, the equations for V(s) are easily modified to get the equations for H(s). The same observation applies to the probability of reaching 100 points within n turns. Another approach for the "hold at 20" heuristic is to use an absorbing Markov chain; see reference 2.

Dynamic programming for the two-player case

To conclude this section, we consider for the game of Pig the original case of two players. The players alternate in taking turns rolling the die. The first player to reach 100 points is the winner. Since there is an advantage in going first in Pig, it is assumed that a toss of a fair coin decides which player begins the game. Then, under optimal play of both players, each player has a probability of 50% of being the ultimate winner. But how to calculate the optimal decision rule? The dynamic programming solution proceeds as follows. The state s is now defined by s = ((i, k), j), where (i, k) indicates that the player whose turn it is has a score i and has accumulated k points so far in the current turn, and j indicates that the opponent's score is j. Define the value function P(s) by

P(s) = the probability that the player whose turn it is wins, given that the present state is s,

where P(s) is taken to be equal to 1 for those s = ((i,k), j) with i + k ≥ 100 and j < 100. To write down the optimality equations, we use the simple observation that the probability of a player winning after rolling a 1 or holding is one minus the probability that the other player will win beginning with the next turn. For state s = ((i, k), j) with k ≥ 0 and i + k, j < 100,

$$P((i,k),j) = \max\left[\,1 - P((j,0),\,i+k),\ \frac{1}{6}\bigl[1 - P((j,0),\,i)\bigr] + \frac{1}{6}\sum_{r=2}^{6} P((i,k+r),\,j)\right],$$

where the first expression on the right-hand side corresponds to the decision "hold" and the second expression corresponds to the decision "roll". Using the method of successive substitutions, these optimality equations can be solved numerically, yielding the optimal decision to take in any state s = ((i, k), j).

The game of Hog

In the game of Hog (Fast Pig) the player has to decide at each turn how many dice to roll simultaneously. A heuristic similar to the "hold at 20" rule manifests itself in the game of Hog: the "five dice" rule, which prescribes rolling five dice in each turn. The rationale of this rule is as follows: five is the optimal number of dice to roll when the goal is to maximize the expected score in a single turn. The expected value of the total score in a single turn with d dice is (1 - (5/6)^d) × 0 + (5/6)^d × 4d, and this expression is maximal for d = 5. In the single-player version of the game, the number of turns needed to reach 100 points has expected value 13.623 when the "five dice" rule is used, while the expected number of turns is 13.039 when an optimal decision rule is used. Again, the heuristic rule performs very well when the criterion is to minimize the expected number of turns. However, the story is different when the criterion is to maximize the probability of reaching 100 points in no more than N turns, with N given. The analysis of the game of Hog is very similar to that of the game of Pig, both for the single-player and the two-player version.
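The rationale behind the "five dice" rule takes only a couple of lines to check; the snippet below is our own illustration.

```python
# Expected score of one Hog turn with d dice: 0 with probability
# 1 - (5/6)**d (some die shows a 1), otherwise on average 4 points per die.
for d in range(1, 11):
    print(d, round((5 / 6) ** d * 4 * d, 3))
# the maximum, about 8.038, is attained at d = 5 (and is in fact tied by d = 6)
```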

Remark: A challenging variant of the game of Hog arises when the two players have to take a decision simultaneously in each round of the game. Think of the following television game show. Two contestants each sit behind a panel with a battery of buttons numbered 1, 2, . . . , 10. In each stage of the game, the two contestants must simultaneously press one of the buttons, and they cannot observe each other's decision. The number pressed is the number of dice the contestant must throw. The score of the contestant's throw is added to his or her total, provided that no 1 was thrown; otherwise no points are added to the current total of the candidate. The candidate who first reaches a total of 100 points is the winner. In case both candidates reach the goal of 100 points in the same move, the winner is the candidate with the largest total. In the event of a tie, the winner is determined by a toss of a fair coin. At each stage of the game both candidates have full information about their own current total and the current total of the opponent. What does the optimal strategy look like? The computation and the structure of an optimal strategy are far more complicated than in the problems discussed before. The optimal rules for the decision problems considered before were deterministic, but for the problem of the television game show the optimal strategy will involve randomized actions. This problem still has many open questions; see reference 3.

Literature

[1] Neller, T.W. and Presser, C.G.M. (2004). Optimal play of the dice game Pig, The UMAP Journal, 25, 25-47 (see also the material on the website http://cs.gettysburg.edu/projects/pig/).

[2] Tijms, H.C. (2007). Understanding Probability, Chance Rules in Everyday Life, 2nd edition, Cambridge University Press, Cambridge.

[3] Tijms, H.C. and Van der Wal, J. (2006). A real-world stochastic two-person game, Probability in the Engineering and Informational Sciences, 25, 1-12.


A Bayesian Approach to Medical Reasoning

Medical reasoning is a complex form of human problem-solving that requires the physician to take appropriate action in a world that is characterized by uncertainty. Dynamic Bayesian networks are put forward as a framework that allows the representation and solution of medical decision problems, and their use is exemplified by a case study in oncology.

Marcel van Gerven

studied cognitive science at the Radboud University Nijmegen and has recently obtained his PhD degree at the computer science department of the same university. He conducted research at the Artificial Intelligence Department at the UNED, Madrid and worked in collaboration with researchers at the Netherlands Cancer Institute on the use of dynamic Bayesian networks in the domain of clinical oncology. Currently, he is working as a postdoctoral researcher on the topic of brain-computer interfacing in collaboration with researchers at the F. C. Donders Institute for Cognitive Neuroimaging.

"You will have to undergo chemotherapy." If your physician delivers this message, it will come as a shock, but having faith in the physician's expertise you will comply and undergo this dangerous intervention. However, many factors influence the desirability of the intervention, such as patient history, presence of symptoms, availability of alternatives, and the uncertainty about all these factors. How then does our physician come up with such a decision?

In this paper, I will make the case for dynamic Bayesian networks as a formalism for the representation of medical knowledge and the execution of medical tasks such as diagnosis, prognosis, monitoring, and treatment of patients. Note that this formalism is not meant to be descriptive (i.e., describing how the physician thinks) but rather normative, in the sense that it formalizes how the physician should act in the light of available evidence. In the following, I will describe (dynamic) Bayesian networks in a medical context and demonstrate their use by means of a case study in oncology. I will end this paper with some thoughts about medical reasoning.

(Dynamic) Bayesian networks

A Bayesian network B = (G, P) is a pair where G is an acyclic directed graph, with nodes corresponding to a set of random variables X, and P is a joint probability distribution (JPD) of the variables in X (Pearl, 1988). For the sake of simplicity, we will assume that the random variables are discrete, such that each conditional distribution P(· | ·) can be specified by a finite look-up table. The JPD factorizes as:

$$P(\mathbf{X}) = \prod_{X \in \mathbf{X}} P\bigl(X \mid \pi_G(X)\bigr),$$

where πG(X) denotes the parents of X in G. The factorized JPD embodies our domain knowledge. For example, the medical knowledge that some disorder D leads to a symptom F can be represented by a simple structure D → F and the associated probability distributions P(F | D) and P(D). Often the arcs can be given a causal interpretation, as in this example, but this is not required. For instance, the structure F → D has no such causal interpretation, but its associated distributions P(D | F) and P(F) can model exactly the same JPD. The graph and associated distributions can be learnt from data or constructed by hand. This means that we may use patient data to learn a Bayesian network, or interview expert physicians in order to obtain the required knowledge. In our research, we have taken the latter approach, which amounts to the assignment of subjective (Bayesian) priors to the distributions of interest.

The factorized representation of a JPD generally reduces the number of parameters that need to be estimated and allows for efficient probabilistic inference. For example, we may compute the probability of a disorder D given an observed finding F, which is a simple example of diagnostic reasoning. Note that this form of diagnostic reasoning goes against the direction of the arcs and is directly interpretable as the application of Bayes' rule.


An example of a system that is based on these principles is Promedas (Kappen and Neijt, 2002), which covers a large diagnostic repertoire of internal medicine by relating risk factors Ri, disorders Dj and findings Fk (Fig. 1). In similar ways, we could define network structures for monitoring, prognosis and treatment.

Figure 1: Promedas models associations between risk factors Ri, disorders Dj and findings Fk.

Although systems such as Promedas perform well in clinical tasks such as differential diagnosis, they often make unrealistic assumptions, such as independence between findings given the disease. This affects both the accuracy of the computed posterior probability distributions and the ability to understand how domain variables interact. In practice, one often needs detailed information about the causal mechanisms that are responsible for observed findings. The use of causality as a guiding principle when building a Bayesian network for clinical decision support is advantageous, since knowledge concerning pathophysiology and the effect of treatment is normally described in the medical literature in terms of causes and effects. Causal models also facilitate the explanation of drawn conclusions, which may increase the acceptance of automated assistance in medicine, both by the physician and by the patient (Teach and Shortliffe, 1984).

It is important to realize that the representation of pathophysiology as such is insufficient for guiding treatment, since clinical decision support often requires the suggestion of appropriate action. In other words, automatically obtaining a differential diagnosis is beneficial in the sense that the physician is less likely to misdiagnose, but it does not always give insight into the optimal treatment given the differential diagnosis. Hence, it is often necessary to represent the decision-theoretical notions of actions and utility as well. Fortunately, this is easily achieved by representing treatment options, as well as the desirability of treatment outcomes, as random variables themselves.

One last and important factor that is not addressed by standard Bayesian networks is the temporal nature of medical problems. During diagnosis, knowing the temporal order and duration of symptoms can influence the diagnostic conclusions; the selection of treatments or tests may depend on the time at which the selection is made; during prognosis, the disease dynamics is described as the unfolding of events over time; and during monitoring, we need to track the patient's pathophysiological status over time. In case we are dealing with problems of a temporal nature, we explicitly include time within a Bayesian network by reasoning over random processes X = { X(t) : t ∈ T } instead of random variables. The resulting model is known as a dynamic Bayesian network. If it is assumed that the Markov property holds, which states that the future is independent of the past given the present, we obtain the following factorization of the JPD:

$$P(\mathbf{X}) = \prod_{t \in T} \prod_{X(t) \in \mathbf{X}(t)} P\bigl(X(t) \mid \pi_G(X(t))\bigr),$$

with X(t) = { X(t) : X ∈ X }. We will focus on discrete-time and discrete-space random processes, which implies that T ⊆ N. If the structure of the dynamic Bayesian network is invariant for all times t ∈ {0, 1, 2, ...} then it can be specified in terms of:
- a prior model B0 = (G0, P0) such that

$$P_0\bigl(\mathbf{X}(0)\bigr) = \prod_{X(0) \in \mathbf{X}(0)} P_0\bigl(X(0) \mid \pi_{G_0}(X(0))\bigr)$$

specifies the initial distribution of the joint process at time 0, and

- a transition model Bt = (Gt, Pt) such that

$$P_t\bigl(\mathbf{X}(t) \mid \mathbf{X}(t-1)\bigr) = \prod_{X(t) \in \mathbf{X}(t)} P_t\bigl(X(t) \mid \pi_{G_t}(X(t))\bigr)$$

specifies the evolution of the process as it moves from time t - 1 to time t, for t ∈ {1, 2, ...}.

For dynamic Bayesian networks, posterior probabilities can be computed by means of specialized inference algorithms, such as the exact interface algorithm (Murphy, 2002) or approximate particle filtering (Koller and Lerner, 2001). For medical tasks we are often interested in filtering (computing posterior probabilities of unobserved random variables at the current time given the evidence) and prediction (computing posterior probabilities of unobserved random variables at some future time given the evidence).
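The following minimal Python sketch is our own illustration of the recursive filtering computation, with made-up numbers and the dynamic Bayesian network collapsed to a single hidden chain.

```python
import numpy as np

prior = np.array([0.9, 0.1])      # P(X(0)): healthy, ill
trans = np.array([[0.95, 0.05],   # P(X(t) | X(t-1)); rows = previous state
                  [0.20, 0.80]])
obs = np.array([[0.8, 0.2],       # P(finding | X); rows = state,
                [0.3, 0.7]])      # columns = finding absent / present

def filter_step(belief, finding):
    """One step of recursive filtering: predict with the transition
    model, then condition on the observed finding via Bayes' rule."""
    predicted = belief @ trans
    updated = predicted * obs[:, finding]
    return updated / updated.sum()

belief = prior
for finding in [1, 1, 0]:         # findings observed at t = 1, 2, 3
    belief = filter_step(belief, finding)
    print(belief)                 # posterior over the patient's status
```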


Case study: High-grade carcinoid tumours

Here we demonstrate how the desiderata of causality, decision-making, and temporal evolution can be combined in a dynamic Bayesian network that is able to solve problems in clinical oncology.

A carcinoid tumour is a type of neuroendocrine tumour that is predominantly found in the midgut and is normally characterized by the production of excessive amounts of biochemically active substances, such as serotonin. These neuroendocrine tumours are often differentiated according to the histological findings, and in a small minority of cases the tumours are of high-grade histology. For these tumours, aggressive chemotherapy is the only remaining treatment option (Moertel et al., 1991). We have constructed a high-grade carcinoid model in collaboration with an expert physician at the Netherlands Cancer Institute; Fig. 2 depicts the structure of this model. Since patients return to the clinic for follow-up every three months, we assume that each time-slice represents the patient status at three-month intervals, at which time treatment can be adjusted. For a detailed description of all variables, see van Gerven (2007).

Figure 2: A dynamic Bayesian network for high-grade carcinoid tumour pathophysiology with prior model B0 and transition model Bt, where shaded variables are observable and unshaded variables are hidden.

Figure 2 represents causal knowledge, such as the fact that the response (resp) of the patient to chemotherapy (chemo) influences the tumour mass (mass). This in turn influences the general health status (ghs), which determines the patient's quality of life (qol). This model can be used to make a prognosis about the patient's future situation, but also to automatically suggest whether or not to administer chemotherapy, as this is captured by the posterior over chemo after patient evidence has been entered.

Whether or not to administer chemotherapy is thought to depend on patient health, treatment history (treathist), and past findings with regard to bone-marrow depression (bmdhist). An important task related to treatment is to find an optimal policy for chemo that determines which treatment decision to make under all possible conditions. This policy is expressed as the conditional distribution P(chemo | ghs, treathist, bmdhist). In van Gerven (2007), it is shown that the optimal policy can be approximated using simulated annealing, where the impact of local changes in P(chemo | ghs, treathist, bmdhist) on the desired outcome (in terms of quality of life and treatment cost) is computed. Using this algorithm, we have found the same treatment policy that is used by physicians in clinical practice.
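The following sketch shows the shape of such a simulated-annealing search over the policy table. It is our own illustration: the state spaces and the evaluate() objective are placeholders standing in for the DBN-based evaluation described in van Gerven (2007), not the actual model.

```python
import math
import random

random.seed(1)

# conditioning states for the policy P(chemo | ghs, treathist, bmdhist);
# the three small state spaces below are invented for the example
STATES = [(ghs, th, bmd) for ghs in range(3)
                         for th in range(2)
                         for bmd in range(2)]

def evaluate(policy):
    """Placeholder score; in the real system this would be the expected
    quality of life minus treatment cost, computed by DBN inference."""
    return -sum((policy[s] - 0.5 * (s[0] > 0)) ** 2 for s in STATES)

def anneal(steps=20000, temp=1.0, cooling=0.9995):
    policy = {s: random.random() for s in STATES}  # P(chemo = yes | state)
    score = evaluate(policy)
    for _ in range(steps):
        s = random.choice(STATES)          # propose a local change
        old = policy[s]
        policy[s] = min(1.0, max(0.0, old + random.gauss(0.0, 0.1)))
        new = evaluate(policy)
        # accept improvements always; accept deteriorations with the
        # usual Boltzmann probability, which shrinks as temp decreases
        if new >= score or random.random() < math.exp((new - score) / temp):
            score = new
        else:
            policy[s] = old                # undo the rejected move
        temp *= cooling
    return policy, score

policy, score = anneal()
print(score)
```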

Some thoughts about medical reasoning

In this paper, we have examined the use of dynamic Bayesian networks as a framework for reasoning under uncertainty in medicine. This begs the question of how physicians reason in practice. Do they act according to probability theory when making an inference, and according to decision theory when making a decision? In other words, are probability and decision theory just normative, or are they descriptive as well? The literature about the cognitive biases and heuristics displayed by humans in general (Kahneman et al., 1982) and physicians in particular (Bornstein and Emler, 2001) suggests not. However, recent research has also demonstrated that some of the biases disappear when questions are posed in a less artificial way (Gigerenzer, 2000). The emerging framework of naturalistic decision-making (Klein et al., 1993) recognizes the importance of these observations, and dictates that we should consider decision-making in a natural setting, where we need to deal with stress, time pressure, fatigue, and communication patterns, as well as with the bounded rationality of humans due to information-processing constraints (Simon, 1955). Under that interpretation, heuristics are not viewed as erroneous, but rather as effective strategies for real-world decision-making (Patel et al., 2002).

One particularly influential view of problem-solving in medicine is the hypothetico-deductive approach (Elstein et al., 1978), an iterative process in which hypotheses are generated according to the available data, and hypotheses in turn guide the selection of new data. It is found that expert physicians generate the correct hypothesis early on and use the remaining time to confirm and refine it, whereas less experienced physicians take longer to decide upon the final hypothesis due to an inability to eliminate incorrect alternatives (Joseph and Patel, 1990). Another observation is that although expert physicians have more extensive knowledge about pathophysiological processes, they tend to make less use of it than non-experts, and base themselves more on clinical experience.


One explanation of this effect is the notion of knowledge encapsulation, which suggests that explicit pathophysiological knowledge is represented by the expert in compiled form, while still being retrievable if necessary (Boshuizen and Schmidt, 1992).

The picture which emerges is one where expert physicians rapidly recognize the correct hypothesis, while still being able to give a causal explanation of how they arrived at it. Our own experiences suggest that expert physicians may indeed operate in this way. During the initial phase of knowledge elicitation, the physician often jumped to conclusions, associating findings with expected outcomes, whereas after being asked for a causal explanation, it became possible to explain associations in terms of cause-effect relations.

From the point of view of knowledge engineering, we emphasize that the translation of a physician's knowledge into a dynamic Bayesian network is difficult and time-consuming, which implies a trade-off between the amount of time one is willing to spend and the quality of the resulting system. When one believes intervention to be the ultimate goal of clinical reasoning, associative models such as Promedas can perform just as well as causal models, provided that they lead to the same actions. However, often not only the intervention but also an explanation of the intervention is required. Furthermore, associative models are difficult to extend as new knowledge becomes available. Therefore, if the aim is to create a flexible system that represents domain knowledge to a high degree of detail, then one should consider the incorporation of causality, decision-making, and time into the framework of dynamic Bayesian networks, as is advocated in this paper.

References

Bornstein, B.H. and Emler, A.C. (2001). Rationality in medical decision making: A review of the literature on doctors' decision-making biases. J Eval Clin Pract, 7, 97-107.

Boshuizen, H.P.A. and Schmidt, H.G. (1992). On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cognit Sci, 16, 153-184.

Elstein, A.S., Shulman, L.S. and Sprafka, S.A. (1978). Medical Problem Solving: An Analysis of Clinical Reasoning. Harvard University Press, Cambridge, MA.

Gerven, M.A.J. van (2007). Bayesian Networks for Clinical Decision Support. PhD thesis, Radboud University Nijmegen, Nijmegen, the Netherlands.

Gigerenzer, G. (2000). Adaptive Thinking: Rationality in the Real World. Oxford University Press, New York, NY.

Joseph, G.-M. and Patel, V.L. (1990). Domain knowledge and hypothesis generation in diagnostic reasoning. Med Decis Making, 10, 31-46.

Kahneman, D., Slovic, P. and Tversky, A., editors (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, UK.

Kappen, H.J. and Neijt, J.P. (2002). Promedas, a probabilistic decision support system for medical diagnosis. Technical report, Stichting Neurale Netwerken, Nijmegen, The Netherlands.

Klein, G.A., Orasanu, J., Calderwood, R. and Zsambok, C.E., editors (1993). Decision Making in Action: Models and Methods. Ablex Publishing Corporation, Norwood, NJ.

Koller, D. and Lerner, U. (2001). Sampling in factored dynamic systems. In Doucet, A., de Freitas, N. and Gordon, N., editors, Sequential Monte Carlo Methods in Practice, chapter 21, pages 445-464. Springer-Verlag, San Francisco, CA.

Moertel, C.G., Kvols, L.K., O'Connell, M.J. and Rubin, J. (1991). Treatment of neuroendocrine carcinomas with combined etoposide and cisplatin. Evidence of major therapeutic activity in the anaplastic variants of these neoplasms. Cancer, 68(2), 227-232.

Murphy, K.P. (2002). Dynamic Bayesian Networks. PhD thesis, UC Berkeley, Berkeley, CA.

Patel, V.L., Kaufman, D.R. and Arocha, J.F. (2002). Emerging paradigms of cognition in medical decision-making. J Biomed Informat, 35, 52-75.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 2nd edition.

Simon, H.A. (1955). A behavioral model of rational choice. Q J Econ, 69(1), 99-118.

Teach, R.L. and Shortliffe, E.H. (1984). An analysis of physicians' attitudes. In Buchanan, B.G. and Shortliffe, E.H., editors, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, Reading, MA.


Micro-foundations are useful, up to a point

Economics consists of the trinity of rationality, equilibrium and individualism. When looking for a characterization of the latter, the movie Monty Python's Life of Brian is not a bad place to start. The main character, Brian, tries to get rid of a mass of people following him, believing he is the chosen one. When they have followed him up to his house, he shouts in utter despair: "You don't need to follow me. You don't need to follow anybody! You've got to think for yourselves. You're all individuals!" The followers echo in chorus: "Yes, we're all individuals!" Brian: "You're all different!" "Yes, we are all different!" reply the followers again.

David Hollanders

gained his degrees in History and in Econometrics in 2003. He is currently a PhD candidate at the UvA and is about to finish the Master's program of the Tinbergen Institute.

Economic analysis rests on the assumption of methodological individualism, which is the scientific name for Brian's exclamation that we are all individuals, who not only should think for ourselves but indeed do. A more formal definition is given by Blaug: "…" [as cited in Earl, 2005]. So, no matter what the macro-phenomenon under study might be (be it unemployment, inflation, or non-market interactions such as state formation or war) economic analysis tries to explain it as the (equilibrium) outcome of the games rational individuals play. This is a diligent task, as lots of assumptions can be relaxed. Agents may be boundedly rational, information is never perfect and out-of-equilibrium dynamics matters for the sustainability of the equilibrium; indeed, there may be multiple equilibria. Understanding democratization, the implementation of welfare programs or election outcomes is equivalent to providing micro-foundations. Many economists feel their model is incomplete without micro-foundations, or as Rizvi comments on the period 1970-1985: "…". These approaches have been very fruitful according to some, imperialistic to others. Be that as it may, methodological individualism as such is seldom seen as a problematic aspect. What is more, it is not really seen as something that has to be argued for; it is simply taken as a given that can go without comment. I believe it can't.

To be sure, it is entirely obvious, a verum factum est, that every macro-phenomenon in the end consists of individual behavior. Elections are indeed the outcome of individuals voting, unemployment is the macro-result of people offering or hiring labour, and war has to start with some head of state refusing to take it anymore. So what I will definitely not argue is that the macro-phenomena mentioned do not ultimately and solely consist of individual actions. But I will argue that they cannot be understood as such, and that micro-foundations are sometimes even counterproductive. Sometimes individual behavior cannot be understood by contemplating and modeling preferences, rationality or information, unless the social context is taken into account.

The strength of micro-economic foundations

The prisoner's dilemma is the textbook example of rational, self-centered persons arriving at an inefficient equilibrium. In this 2x2 game two players have two strategies: to cooperate or not to cooperate. Cooperating is the strictly dominated strategy, hence the (unique) equilibrium is that both players do not cooperate. The resulting equilibrium of both not cooperating is, however, Pareto inefficient (both cooperating is better for both of them). This can be used to model such varying matters as arms races, advertisement and price competition between firms. Both countries are better off with no arms expenditure, but no matter what the other does, you had better arm yourself. This might be the end of the matter, were it not that we do observe cooperation, both in experiments and in the real world. The question then is: how come? How to understand the macro-phenomenon of social cooperation? Some words about words.
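The dominance argument can be spelled out mechanically. In the following Python fragment (our own illustration, with the usual textbook payoffs) defection is a best response to either action of the opponent:

```python
# Payoffs for the one-shot prisoner's dilemma (row player, column player);
# the numbers are illustrative, with the usual ordering T > R > P > S.
C, D = 0, 1
payoff = {(C, C): (3, 3), (C, D): (0, 5),
          (D, C): (5, 0), (D, D): (1, 1)}

def best_response(opponent):
    """The row player's payoff-maximizing action against `opponent`."""
    return max((C, D), key=lambda a: payoff[(a, opponent)][0])

# Defect is a best response to everything, so it strictly dominates:
print(best_response(C), best_response(D))  # -> 1 1 (defect, defect)
```

With these payoffs the folk-theorem logic invoked below is also easy to check by hand: grim-trigger cooperation in the infinitely repeated game is sustainable once the discount factor reaches (5 - 3)/(5 - 1) = 0.5.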


I call this a macro-phenomenon as cooperation is stricto sensu supra-individual; one cannot cooperate alone. It is totally true that cooperation is the direct result of individual choices, but whether cooperation can be understood as such is exactly the topic under discussion. One might want to argue that cooperation results from altruism, social capital or simply a proper education. This might all be true, but the point of economic analysis is that social cooperation can perfectly be rationalized if the game is infinitely repeated. The folk theorem states that many equilibria are then sustainable. This procedure, simple as it may be in the present context, in essence constitutes the core of economics. Some outcome (social cooperation) is not understood as resulting from socialization or something the like, but is endogenously derived from a model in which all individuals are rational (and in this case also self-centered). I hold this to be a good example, as the relation between the assumptions and the outcome is intuitive: the longer the interaction horizon, the more people cooperate. That makes sense. Of course, the model is not claiming to be true or realistic in the immediate sense of the word. It cannot explain why people cooperate in a one-shot game, as people tend to do in experiments (Camerer, 2003). So it is not descriptive in every single case, and is without predictive power deserving the name. But it formalizes a plausible mechanism, which is the name of the game in economics. It does not tell us when cooperation occurs, but provides a way to understand it when it does.

Cooperation in the prisoner's dilemma might not be the best example of a true macro-phenomenon. Now look at one that is: segregation in cities. This is as macro as it gets. There are a lot of individuals involved, together determining the outcome, without the ability to influence it. Discrimination, group identity and the reproduction of social classes are some among many factors that might have their role to play. However, this hardly counts as an explanation in economics. For a micro-underpinning one can turn to the work of Schelling (Batten, 2000). The model consists of a chess board, representing a city. There are two classes of agents; in the original model the dividing line is black and white, but any two classes might be thought of. Preferences are a dislike for being separated from the own class, rather than a dislike for the other class. People have the following rule: "…" These preferences are reconcilable with fully integrated cities (for example, free corners and each even diagonal occupied by one class, and each odd diagonal by the other one). But a few permutations can give way to a pattern of unraveling into a highly segregated city (a segregated chess board, that is). Key here is that the overarching occurrence of segregation can be derived from preferences and behavior at the micro-level, and that these are plausible in at least the following sense: people are not totally xenophobic, but do not want to be fully isolated either. This arguably is a satisfactory explanation, since large segregation is observed while many people seem to dislike such segregation.

Segregation is thus explainable as the collective outcome of maximizing individual behavior, where segregation at the aggregate level is the unintended and perhaps unforeseen consequence of intended action. So far, no appeal has to be made to any social-psychological mechanism. So far and, for economics, so good. Still, the model does not explain where the dividing line comes from, and why some dividing lines matter and others don't. Economists seem to have some difficulty in explaining those difficulties. Here might be a role to play for social psychology and sociology. But let's first turn to another complication of micro-economic foundations.
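Schelling's mechanism is easily replicated. The following Python sketch is our own illustration, with an arbitrary one-third threshold and a wrap-around board; mildly discontented agents move to random empty cells.

```python
import random

random.seed(0)

SIZE, EMPTY, THRESHOLD = 20, 40, 1 / 3   # illustrative parameters

# 0 = empty cell; 1 and 2 are the two classes of agents
cells = [1, 2] * ((SIZE * SIZE - EMPTY) // 2) + [0] * EMPTY
random.shuffle(cells)
grid = [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def unhappy(r, c):
    """An agent wants to move when fewer than THRESHOLD of its occupied
    neighbours belong to its own class (Schelling's mild preference)."""
    me, same, occupied = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                nb = grid[(r + dr) % SIZE][(c + dc) % SIZE]
                occupied += nb != 0
                same += nb == me
    return occupied > 0 and same / occupied < THRESHOLD

for _ in range(200000):                  # let unhappy agents relocate
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    er, ec = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] and grid[er][ec] == 0 and unhappy(r, c):
        grid[er][ec], grid[r][c] = grid[r][c], 0

# after the relocations, hardly any agent is unhappy, yet the board
# typically ends up strongly segregated
print(sum(unhappy(r, c) for r in range(SIZE)
          for c in range(SIZE) if grid[r][c]))
```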

The difficulty of micro-foundations

Often, agents are groups of individuals, not individuals. For example, firms or states are treated as if being, acting and thinking as an individual. Not so, of course, as is explicitly addressed by the agency literature. CEOs do not fully internalize the interests of shareholders, and representatives are chosen by the voters, which is not the same as acting as voters choose. Two well-known results (the Condorcet paradox and Arrow's impossibility theorem) mean further trouble. The Condorcet paradox may arise in pairwise voting by three (rational) individuals over three alternatives. The three alternatives are a, b and c, and all three individuals have both complete and transitive preferences. These preferences are described, in the case of the first agent, by a ≻ b ≻ c. This means that he prefers a over b, and b in turn over c (hence, by transitivity, a over c). The preferences of the second agent are described by b ≻ c ≻ a, and finally c ≻ a ≻ b for the third individual. The voting mechanism consists of pairwise voting. The result is that the aggregated preferences are not transitive, as a is chosen over b, b over c, and c over a. So, though the individuals are rational, social preferences are not. Rationality of social preferences is not assured, even if individual preferences are.

A more general result is Arrow's impossibility theorem. It first formulates five reasonable properties.
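The cycle can be verified mechanically; the following Python fragment (our own illustration) runs the three pairwise votes:

```python
from itertools import combinations

# the three cyclic preference orderings from the text (best to worst)
prefs = {1: ['a', 'b', 'c'],
         2: ['b', 'c', 'a'],
         3: ['c', 'a', 'b']}

def majority_winner(x, y):
    """Pairwise vote: x beats y if a majority ranks x above y."""
    votes_x = sum(p.index(x) < p.index(y) for p in prefs.values())
    return x if votes_x > len(prefs) / 2 else y

for x, y in combinations('abc', 2):
    print(f"{x} vs {y}: majority prefers {majority_winner(x, y)}")
# a beats b, b beats c, yet c beats a: the aggregate is intransitive
```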


The theorem then states that there is no voting mechanism that is transitive and satisfies the five conditions. One rather pessimistic consequence is that "…" (Schotter). Another consequence is, again, that rationality of all the individuals involved is no guarantee that social preferences are rational.

The limitation of micro-foundations


"If agents do not think as economists think they should, all the more necessary that economists

think as the agents do."


Conclusion

Congruence with the stylized facts is a first requirement for a good model, but in itself it is no great achievement, since modelers are very well aware of those facts when modeling. It says more, to paraphrase Friedman, about the ingenuity of the model-builders. The second necessary condition for a good model is its internal consistency; in other words, the math should work out. Although the difficulty of these two tasks is not to be underestimated, as anyone who has tried it can testify, these are not the final, not even the most important, criteria by which to judge the qualities of a model. Whether the model is satisfying is the subjective answer to the question whether the mechanism is convincing, plausible and intuitive. For example, the textbook monopoly model is convincing as the formalization of the notion that a monopolist raises prices above marginal costs. That the mechanism makes sense does not mean the same can be said of the assumptions. In this case the assumptions are debatable: linear demand, constant marginal costs, common knowledge of demand, profit maximization as the only goal. So, the third criterion means that not the assumptions as such but the mechanism should be plausible. This criterion is subjective in nature, and science is then the art of persuasion.

In economics, to understand is to model. It means endogenizing something in a model of (boundedly) rational agents. This is less restrictive than it seems, as lots of assumptions can be, and indeed are, relaxed. Economists do not take every individual to be rational, all information to be perfect or every equilibrium to be unique. This has resulted in many elegant models, with the (repeated) prisoner's dilemma, Schelling's segregation model and the principal-agent literature as examples. As the paradox of Condorcet and Arrow's impossibility theorem show, countries and firms cannot without problems be treated as individuals. This is to say that providing true micro-foundations is difficult, and that models have to be understood under that proviso. It is not only difficult; sometimes stretching methodological individualism to its limits simply does not make sense, as the voter paradox shows. It runs the risk of concentrating on the first two criteria, internal consistency and congruence with (some of) the stylized facts, at the expense of propagating a mechanism that is not consistent with even casual observation of, in this case, voting behavior.

Yes, we are all individuals, and yes, at least in my own extra-scientific opinion, we should think for ourselves. But sometimes we simply do not. And if agents do not think as economists think they should, all the more necessary that economists think as the agents do.

Literature

Batten, D.F. (2000). Discovering Artificial Economics, Westview Press.

Bendor, J., Diermeier, D. and Ting, M. (2003). A Behavioral Model of Turnout, American Political Science Review, 97, 261-280.

Camerer, C.F. (2003). Behavioural studies of strategic thinking in games, Trends in Cognitive Sciences, 7, 225-231.

Camerer, C.F., Loewenstein, G. and Prelec, D. (2004). Neuroeconomics: Why Economics Needs Brains, Scandinavian Journal of Economics, 106, 555-579.

Earl, P.E. (2005). Economics and psychology in the twenty-first century, Cambridge Journal of Economics, 29, 909-926.

Hendry, D. (1980). Econometrics: Alchemy or Science?, Economica, 47, 387-406.

Rizvi, S.A.T. Postwar neoclassical microeconomics.

Schotter, A.F. (1994). Microeconomics, a modern approach.


Introduction of the no-claim protector: Research of the consequences for the Dutch car-insurance market

At the start of 2007, a new product was introduced in the insurance market: the no-claim protector. The no-claim protector is mainly used in the car-insurance market and, according to the insurers, would lead to significant cost savings for the policyholders. In most car insurances, the premium depends on the number of claims and, in some cases, the total amount that has been claimed in the preceding years. The premium system is split up in several classes, each with its own reduction percentage on the basic premium. This system is called a bonus-malus system.

Hein Harlaar

is a Master student in Actuarial Sciences at the University of Amsterdam. He started his study in 2004. In 2006 Hein joined Towers Perrin where he works at the Employee Benefits Services department. This article is a summary of his bachelor thesis.

People who do not submit any claims for several years are rewarded with an increasing reduction in their premiums. However, when a policyholder submits a claim, these reductions strongly decrease for the upcoming years. The no-claim protector allows the insured to submit one claim every year without losing their 'damage-free status'. Therefore it can be seen as an additional insurance that protects against premium increases when a claim is submitted.

This article summarises research on the efficiency of a bonus-malus system including the no-claim protector. The efficiency is a criterion which describes to what extent the paid premiums match the expected claims. The goal of this research is to describe for which drivers the no-claim protector is a profitable investment. A comparison between a system with and without the no-claim protector is made. Furthermore, populations of policyholders with a (uniform) fixed claim frequency and populations with a stochastic claim frequency are taken into account. All scenarios are implemented in a model which simulates the development of the distribution of the policyholders over the different premium classes. The considered bonus-malus system originates from the ANWB, which is one of the leading car insurance providers in the Netherlands.

Methodology

To analyse the efficiency and expected premiums in a bonus-malus system, the system is defined as a Markov chain. A Markov chain is a stochastic process of which the next state only depends on the present state and is independent of previous states. The range of n premium classes is described by the vector $\vec{C} = (C_1, \ldots, C_n)$. According to the property of memorylessness, the transition probabilities from state $C_i$ to $C_j$ are independent of the way the insured ended up in state $C_i$. The transitions within a bonus-malus system are described by the transition matrices $T_k$, with entries $t_{ij}(k)$. The entries of $T_k$ are equal to one when the insured transfers from $C_i$ to $C_j$ by submitting k claims, and zero otherwise. The probability of submitting k claims is $p_k(\lambda)$, with $\lambda$ the annual claim frequency. A distribution which is commonly used to model these probabilities is the Poisson distribution (Lemaire, 1989):

$p_k(\lambda) = \frac{\lambda^k e^{-\lambda}}{k!}$  (1)

The matrix $P(\lambda)$ provides the probability to transfer from class i to class j within a period, $p_{ij}(\lambda)$. The elements of this matrix are determined as follows:


$p_{ij}(\lambda) = \sum_{k=0}^{\infty} p_k(\lambda) \, t_{ij}(k), \qquad i, j = 1, 2, \ldots, n$  (2)
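As an illustration, the matrix $P(\lambda)$ of equation (2) can be built in a few lines of code. The sketch below is a minimal Python version in which the ladder rule (one class up after a claim-free year, five classes down per claim) and the number of classes are illustrative assumptions, not the actual ANWB system:

import numpy as np
from math import exp, factorial

def poisson_pmf(k, lam):
    # equation (1): probability of submitting k claims at claim frequency lam
    return lam ** k * exp(-lam) / factorial(k)

def next_class(i, k, n, penalty=5):
    # illustrative ladder rule (class 0 = deepest malus, class n-1 = top bonus):
    # one class up after a claim-free year, `penalty` classes down per claim
    j = i + 1 if k == 0 else i - penalty * k
    return min(max(j, 0), n - 1)

def transition_matrix(lam, n=14, k_max=10):
    # equation (2), with the Poisson sum truncated at k_max claims per year
    P = np.zeros((n, n))
    for i in range(n):
        for k in range(k_max + 1):
            P[i, next_class(i, k, n)] += poisson_pmf(k, lam)
    return P / P.sum(axis=1, keepdims=True)  # renormalise the truncated tail

Truncating the sum at k_max claims loses a negligible amount of probability mass for realistic claim frequencies; the renormalisation keeps every row a proper probability distribution.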

If the distribution of the insured over the premium classes in the initial state is denoted as the vector $\vec{l}(0)$, with $f_j$ the fraction of the population in premium class j, then

$\vec{l}(0) = (f_1, f_2, \ldots, f_n)$ with $f_j \geq 0, \; \sum_{j=1}^{n} f_j = 1$  (3)

Then the distribution of the insured at a given point in time is determined by the equation below:

$\vec{l}(n) = \vec{l}(n-1) \, P(\lambda) = \vec{l}(n-2) \, P(\lambda) \, P(\lambda) = \ldots$  (4)

A Markov chain ultimately converges to a steady state which is independent of the distribution of the policyholders in the initial state (Tijms, 2003). When the system is in the steady state, the distribution of the population over the premium classes remains (approximately) constant:

$\lim_{t \to \infty} \vec{l}(t+1) = \lim_{t \to \infty} \vec{l}(t) \, P(\lambda)$  (5)

The steady state premium is the expected premium income of the insurer in the steady state. The vector $\vec{k}$ is defined as the vector with the reduction on the base premium for each of the n premium classes. The steady state premium $b(\lambda)$ corresponding with claim frequency $\lambda$ follows by multiplying the basic premium q with $\vec{k}$ and $\vec{l}(\infty)$:

$b(\lambda) = (140\%, 110\%, \ldots, 20\%) \cdot \vec{l}(\infty) \, q$  (6)

The elasticity of the steady state premium with respect to the claim frequency is called the efficiency:

$e(\lambda) = \frac{\lambda}{b(\lambda)} \frac{d b(\lambda)}{d \lambda} = \frac{d \log b(\lambda)}{d \log \lambda}$  (7)

Ideally the elasticity is equal to one, which means that an increase of the claim frequency by x% leads to a similar increase in the expected premium. In this case the insured pays an annual premium which corresponds with the expected claim amounts.
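Given a transition matrix, the steady state (5), the steady state premium (6) and the efficiency (7) each take only a few lines. The sketch below reuses the transition_matrix function from the previous sketch; the linear premium ladder from 140% down to 20% of the base premium is an assumption in the spirit of equation (6), not the actual ANWB ladder:

import numpy as np

def steady_state(P, tol=1e-12):
    # iterate l(t+1) = l(t) P(lambda) until convergence, as in equation (5)
    l = np.ones(P.shape[0]) / P.shape[0]
    while True:
        l_next = l @ P
        if np.abs(l_next - l).max() < tol:
            return l_next
        l = l_next

def steady_state_premium(lam, k_vec, q=1.0):
    # equation (6): premium ladder k_vec times steady state distribution times q
    l_inf = steady_state(transition_matrix(lam, n=len(k_vec)))
    return q * float(k_vec @ l_inf)

def efficiency(lam, k_vec, h=1e-4):
    # equation (7), approximated by a central difference in log-log space
    b_up = steady_state_premium(lam * (1 + h), k_vec)
    b_dn = steady_state_premium(lam * (1 - h), k_vec)
    return (np.log(b_up) - np.log(b_dn)) / (np.log(1 + h) - np.log(1 - h))

k_vec = np.linspace(1.40, 0.20, 14)  # assumed ladder: 140% down to 20%
for lam in (0.05, 0.10, 0.20):
    print(lam, steady_state_premium(lam, k_vec), efficiency(lam, k_vec))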

Assumptions

Determining the steady state premium and efficiency requires some assumptions about several parameters. Each claim amount is assumed to be equal. In practice, car-insurance premiums depend on several factors: the current value of the car, the year of construction and the residence of the insured all have some influence on this amount. Because it is difficult to model these factors, a uniform basic premium is assumed. This premium is estimated by the average gross received premium as determined by 'Centrum voor Verzekeringsstatistiek' (CVS). As mentioned before, three different models for the claim frequencies are considered. The first is a fixed claim frequency which is equal for all insureds. Furthermore, we calculate with a population which distinguishes between good and bad drivers. The claim probability in this case is given by the equation below; the claim frequency will be a weighted average of the claim frequencies of the good and the bad drivers.

$p_k = P(K = k \,|\, \text{Good}) \, P(\text{Good}) + P(K = k \,|\, \text{Bad}) \, P(\text{Bad})$  (8)

Finally a population with a stochastic claim frequency is considered, by taking a Gamma distributed random variable $\Lambda$ with parameters $\alpha$ and $\beta$. The Gamma distribution is commonly used to model claim frequencies (Lemaire, 1989). The assumed values of the parameters $\alpha$ and $\beta$ are respectively 1.6313 and 16.1384 and are based on a maximum likelihood estimation of the claims in a Belgian insurance portfolio (Kaas et al., 2001). Mathematically the derivation of the probability of submitting k claims goes as follows:

$P(K = k \,|\, \Lambda = \lambda) = \frac{\lambda^k e^{-\lambda}}{k!}$  (9)

For k = 0 this leads to

$P(K = 0) = \int_0^\infty P(K = 0 \,|\, \Lambda = \lambda) \, f_\Lambda(\lambda) \, d\lambda = \int_0^\infty e^{-\lambda} f_\Lambda(\lambda) \, d\lambda = E[e^{-\Lambda}]$  (10)

The expression above is equal to the moment-generating function (mgf) of $\Lambda$ in t = -1. For a Gamma distributed random variable this function has a simple expression:

$M_\Lambda(t) = \left(\frac{\beta}{\beta - t}\right)^{\alpha}, \qquad M_\Lambda(-1) = \left(\frac{\beta}{\beta + 1}\right)^{\alpha}$  (11)

Using the mgf, it appears to be possible to express the claim probabilities for all positive integer values of k in the known parameters. For all three scenarios mentioned above, the steady state premium and the corresponding efficiency are calculated in bonus-malus systems with and without the no-claim protector.
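The claim probabilities can in fact be written in closed form: mixing a Poisson count over a Gamma(α, β) distributed claim frequency yields the negative binomial distribution, a standard result in risk theory (Kaas et al., 2001). A minimal sketch with the parameter values quoted above:

from math import gamma as gamma_fn

ALPHA, BETA = 1.6313, 16.1384  # maximum likelihood values quoted above

def claim_probability(k, alpha=ALPHA, beta=BETA):
    # P(K = k) for a Poisson count mixed over Lambda ~ Gamma(alpha, beta):
    # the negative binomial probabilities
    binom = gamma_fn(alpha + k) / (gamma_fn(alpha) * gamma_fn(k + 1))
    return binom * (beta / (beta + 1)) ** alpha * (1 / (beta + 1)) ** k

# sanity check against equations (10) and (11): P(K = 0) = M_Lambda(-1)
assert abs(claim_probability(0) - (BETA / (BETA + 1)) ** ALPHA) < 1e-12
print([round(claim_probability(k), 4) for k in range(4)])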


Results

Since the results of the three models did not differ significantly, the results of only one of the models are presented below.

Figure 1: The steady state premium

Figure 1 shows the development of the steady state premium for expected claim frequencies between 0 and 0.5. It considers the case of a population with good and bad drivers. The pink line displays the premium in the situation without a no-claim protector. The two lines intersect at a λ of 0.155. This means that for claim frequencies above this value the expected premium income in the steady state is smaller when the no-claim protector is applied.

Figure 2: Efficiency of the bonus-malus systems

In figure 2 the efficiency of both systems is compared. The efficiency is an important measure for the fairness of bonus-malus systems. The graph shows that the system without the no-claim protector (pink line) is much more efficient for claim frequencies below 0.35 and reaches the ideal efficiency of 1 at a λ of 0.18.

Conclusions

When the efficiency of a bonus-malus system is low, the paid premiums do not correspond with the insured's expected claim amounts. In this case there is no fair price: the good drivers subsidize the bad ones. In practice the average annual claim frequency lies between 0.05 and 0.2 (Kaas et al., 2001). Within this domain the efficiency shows a strong upward trend when the no-claim protector is not applied. However, when the no-claim protector is applied, the efficiency remains close to zero. This low efficiency is caused by the fact that in the steady state over 90% of the drivers end up in the highest premium class, so almost all insureds pay the same premium. This is not a favorable situation, for the insurer as well as for the policyholders.

The question is to what extent the good drivers want to switch to another insurer in this case. Empirical research showed that annually about 12.5% of the insured switch to another insurer (Barone & Bella, 2004). An unfair price and disputes between insured and insurer appear to be the main motives to consider a switch. However, for insureds with a no-claim protector the cost of switching could be relatively high, because the new insurer bases its initial premium on the true number of submitted claims.

Summarizing, the introduction of the no-claim protector has great disadvantages for the efficiency of a bonus-malus system, because almost all insureds will end up in the same premium class. Only for drivers who submit significantly more claims than average is this a profitable situation. In spite of this, the increased switching costs will prevent the insurer from losing all the good risks.

References

Anton, C., Camarero, C. and Carrero, M. (2005). Analysing firms' failures as determinants of consumer switching intentions. University of Valladolid, Spain.

ANWB (2007). Homepage (www.anwb.nl), March 2007.

Barone, G. and Bella, M. (2004). Price-elasticity based customer segmentation in the Italian auto insurance market. Journal of Targeting, Measurement and Analysis for Marketing, 13(1), 21-31.

Centrum voor Verzekeringsstatistiek (2006). Statistieken schadeverzekeringen. (http://www.verzekeraars.org/smartsite.dws?id=43&mainpage=3145)

Kaas, R., Goovaerts, M., Dhaene, J. and Denuit, M. (2001). Modern Actuarial Risk Theory. Kluwer Academic Publishers.

Lemaire, J. (1989). Automobile Insurance: Actuarial Models. Dordrecht: Kluwer.

Tijms, H.C. (2003). A First Course in Stochastic Models. Wiley.


Conflicting interests in supply chain optimization

An elementary aspect of production planning is the relation between setup costs for starting a production run and the costs for holding inventory. The problem is to find the right balance between these costs. One can link two of these problems in a supply chain where one producer supplies an intermediate product for the next producer. In such a setting you also need a balance between the supplier’s and customer’s costs. We can still handle this multi-level production problem if we are allowed to see it as one optimization problem. But what if the supplier and customer have different goals in mind? In the following we investigate how to handle this situation of conflict.

Reinder Lok

studied Econometrics at the University of Groningen. From February 2003 to January 2007 he did research at the University of Maastricht. Last June he successfully defended his PhD thesis 'Auction Mechanisms in Supply Chain Optimization'. Currently he is employed by Statistics Netherlands in Heerlen. The author can be contacted via e-mail: [email protected]

The problem

Let us focus on the setup costs and inventory holding costs. With respect to the setup costs, one prefers to minimize the number of production runs. However, this might result in high inventory holding costs if the time between producing and supplying the product gets long. The problem is therefore to find the right balance between setup and holding costs. This so-called lot-sizing problem is already computationally hard in discrete-time settings where capacity restrictions hold. The problem gets even more complicated in supply chain settings. In that setting we not only need to find the right balance between setup costs and inventory holding costs, but also between the costs at the different production levels.

Many researchers have considered the multi-level lot-sizing problem and the complexity of finding the optimal (lowest cost) solution. However, this approach assumes that two crucial conditions hold.

- One, there is one authority that is responsible for the production planning.

- Two, the authority has all information that is needed for finding the optimal solution.

In our competitive and specialized world, these conditions are easily violated.

In the following we discuss how we can handle these situations. We start with the problem of conflicting interests and the need to handle this problem. Then we explain the principles from the field of mechanism design that should be applied. However, we will see that a straightforward application of mechanisms is too expensive. Therefore, we present the combinatorial auction as a practical implementation in which the planning problem and the procurement problem are integrated.

Conflicting interests

In operational research we often think in terms of finding the optimal solution. But in general, there might be no agreement on what is optimal. On the contrary, in economic terms we are all competing for the same scarce goods. So instead of working together, we might be fighting to reach our own goals.

These conflicting interests play a role in the production setting in supply chains. There we want to implement the cheapest solution for the whole chain, but we also know that the links within the chain are aiming for their own profits. Unfortunately, most companies wrongly assume that behaving in their own interest is also in the interest of the supply chain, as Narayanan and Raman (2004) pointed out:

"Every firm behaves in ways that maximize its own interests, but companies assume, wrongly, that when they do so, they also maximize the supply chain's interests. In this mistaken view, the quest for individual benefit leads to collective good, as Adam Smith argued about markets more than two centuries ago. Supply chains are expected to work efficiently without interference, as if guided by Smith's invisible hand. But our research over the last ten years shows that executives have assumed too much. We found, in more than 50 supply chains we studied, that companies often didn't act in ways that maximized the network's profits; consequently, the supply chains performed poorly."

It is the problem of selfish behaviour of supply chain partners that we discuss in the following.

Mechanism design

There are ways to overcome the incentive problem in supply chain planning. The idea is to reshape the circumstances in such a way that the selfish behaviour of links is indeed the best for the chain. These ways are studied in the field of mechanism design. Mechanism design is more or less a generalization of the field of game theory. Where game theory studies the behaviour of players in a certain situation, mechanism design handles the matter of how to shape the situation such that the rational behaviour of players yields a desired outcome. This field of research is an important part of contemporary economic research. In December 2007 even the Nobel Prize in economics was awarded to researchers who initiated, refined and applied the theory of mechanism design to the allocation of goods.

We apply the theories of mechanism design to production planning in supply chains. The goal is to shape the business rules in a supply chain such that individual interests and the supply chain's interest are reconciled. Assume that companies are profit maximizers. Then companies are willing to change their behaviour if that increases their individual profit. In short, good behaviour should be rewarded, inconvenient behaviour should be fined. However, business is voluntary. There is no government that is willing to act as a police officer in production planning. Businesses should shape the situation themselves. Rewards and fines should be a natural consequence of the way they interact. Narayanan and Raman (2004) showed that this is really possible:

"In recent years, many companies have assumed that supply costs are more or less fixed and have fought with suppliers for a bigger share of the pie. (..) Our research, however, shows that a company can increase the size of the pie itself by aligning partners' incentives. (..) If the companies work together to efficiently deliver goods and services to consumers, they will all win. If they don't, they will all lose to another supply chain."

Revelation principle

A mechanism is a game that is designed in such a way that its equilibrium yields the outcome wanted by the mechanism designer. Usually in mechanism design it is assumed that the only action of the participants is to reveal their preferences to a neutral officer. The idea behind this assumption is that the neutral officer will play the game for all players according to their preferences and the rules of the game. This idea is called the revelation principle.

In our setting of a supply chain, this gives the following steps:

- First, the two links of the chain reveal their costs and capacities to the neutral officer.

- Second, the multi-level lot-sizing problem is solved by the officer.

- Third, production takes place according to the optimal planning found by the officer.

- Finally, goods and money are transferred according to the optimal plan and the payment rules of the mechanism.

(Note that in fact the neutral officer is just the optimization tool used for finding the optimal planning.)

The payment rule

The first step is a crucial one: only when the participants tell the truth will the following steps lead to the optimal production planning. The question is whether the participants have an incentive to tell the truth. Actually, this depends on the payment rules of the last step.

The payment rules of the so-called VCG mechanism will do this job. The idea of these payments is that you pay an amount equal to your own alleged benefit from participating, and receive an amount equal to the alleged benefit to the whole system resulting from your participation. As a result, your net benefit equals your own true benefit from participation plus the alleged benefit for the others. This is exactly what is maximized if you tell the truth. So, the VCG mechanism gives the incentive to tell the truth.

Let us see what happens in the supply chain setting. Suppose that the current supply chain planning results in a cost of 1000 euro for the supplier and 500 euro for the buyer. Furthermore, the optimal planning results in costs of respectively 700 and 600 euro. Now they apply the VCG mechanism. First consider the payment of the supplier. The first part of the payment is equal to the supplier's benefit, which is 1000 - 700 = 300 euro, to be paid. The second part is the reduction in overall costs (200 euro), to be received. So the supplier pays 100 euro. As he had a cost reduction of 300 euro, the net benefit for him is 200 euro. The buyer faces an increase in costs, so the first part of the payment is a receipt of 600 - 500 = 100 euro. The second part of the payment is again the reduction in overall costs, 200 euro. So the buyer receives in total 300 euro. Together with her cost increase of 100 euro, she has a net benefit of 200 euro.
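The arithmetic above is easy to verify in code. The sketch below implements just the net VCG transfer described in this section for the two-party case; the function name and data layout are my own:

def vcg_payments(current_costs, optimal_costs):
    # net transfer per party: pay your own alleged cost reduction,
    # receive the overall cost reduction (a negative payment is a receipt)
    total_saving = sum(current_costs.values()) - sum(optimal_costs.values())
    return {name: (current_costs[name] - optimal_costs[name]) - total_saving
            for name in current_costs}

payments = vcg_payments({"supplier": 1000, "buyer": 500},
                        {"supplier": 700, "buyer": 600})
print(payments)                                    # {'supplier': 100, 'buyer': -300}
print("budget deficit:", -sum(payments.values()))  # 200 euro

The deficit it prints is exactly the 200 euro shortfall discussed in the next section.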

Impossibility

The example shows that both links in the supply chain have a net benefit equal to the total cost reduction of 200 euro. So each party in effect claims the whole surplus that the mechanism generates. As a consequence, there is a budget deficit of 200 euro (one player pays 100 euro, but the other receives 300). This is not a coincidence. In fact, the impossibility result of Myerson and Satterthwaite (1983) says that it is impossible to have budget balance in this setting. The problem is that there are only two participants, who are both essential for achieving any savings at all.

To overcome the impossibility result, we introduce competition in this setting. As before, we assume that there is only one buyer facing a lot-sizing problem. But now we assume that the buyer can get its input from different suppliers competing for the order. Furthermore, the choice between these suppliers will be made by using a procurement auction mechanism with VCG payments. This combines the choice of a supplier and the production planning, in order to be able to coordinate the planning at the two levels of the supply chain.

Combinatorial auction

In my thesis (Lok, 2007) we introduced a combinatorial auction for the lot-sizing setting in a supply chain. The combinatorial auction is appropriate for this setting, i.e. it fits the cost structure implied by the planning problems faced by the bidders. The combinatorial auction allows bidders to fully incorporate their cost structure, in particular balancing setup and holding costs. As a consequence, a combinatorial auction using the rules of the VCG mechanism will yield the cost-minimizing production plan of the (multi-level) lot-sizing problem. However, even though the combinatorial auction allows bidders to express their costs appropriately in their bids, it might sometimes be better for the auctioneer to use less sophisticated auction mechanisms. Fortunately, on average the combinatorial auction is by far the best solution. So a supply chain that needs to improve its overall planning should consider the use of the combinatorial auction. This mechanism is pre-eminently the solution for matching the lot-sizing problems at both levels of the chain.
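To give a flavour of what the auctioneer has to solve, the sketch below brute-forces the winner determination problem of a tiny combinatorial procurement auction: the buyer selects bundle bids that together cover every delivery period exactly once at minimal total cost. The bids are hypothetical and the enumeration is exponential; realistic instances are NP-hard and call for integer programming:

from itertools import combinations

def winner_determination(bids, periods):
    # brute force over all subsets of bundle bids; keep the cheapest
    # subset whose bundles cover the required periods exactly once
    best_cost, best_combo = float("inf"), None
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            covered = sorted(p for _, bundle, _ in combo for p in bundle)
            if covered == sorted(periods):
                cost = sum(price for _, _, price in combo)
                if cost < best_cost:
                    best_cost, best_combo = cost, combo
    return best_cost, best_combo

# hypothetical bundle bids: (supplier, delivery periods covered, asked price)
bids = [("A", (1, 2, 3, 4), 90), ("A", (1, 2), 55),
        ("B", (3, 4), 30), ("B", (1, 2, 3, 4), 100)]
print(winner_determination(bids, periods=[1, 2, 3, 4]))
# -> (85, (('A', (1, 2), 55), ('B', (3, 4), 30)))

Bundle bids are what lets a supplier price one long production run differently from two short ones, which is exactly how the setup-holding trade-off enters the auction.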

Concluding

Multi-level lot-sizing becomes a difficult problem when different companies are involved. In these situations the problem exists that each company aims for its own profit, potentially hindering the optimal planning of the supply chain. Solutions from the field of mechanism design are not suitable for the situation in which there is only one supplier and one buyer. The problem is that both are then a kind of monopolist, giving them the power to block improved planning. Introducing multiple suppliers that compete for the order eliminates the problem. Moreover, the combinatorial auction appears to be a format that fits the cost structure of lot-sizing. It is on average also the most preferred mechanism for the buyer.

Literature

Lok, R.B. (2007). Auction Mechanisms in Supply Chain Optimization. Universitaire Pers Maastricht, ISBN 978-90-5278-638-4, digitally available at the University Library of Universiteit Maastricht.

Myerson, R.B. and Satterthwaite, M.A. (1983). Efficient mechanisms for bilateral trading. Journal of Economic Theory, 29(2), 265–281.

Narayanan, V.G. and Raman, A. (2004). Aligning incentives in supply chains. Harvard Business Review, November. DOI: 10.1225/R0411F.


Coalitional and Strategic Bargaining: the Relevance of Structure

How relevant is the structure of a bargaining situation to its outcome? Will the order of the moves or the degree to which the actions of the bargaining parties are restricted affect what will happen? If structure turns out to be highly relevant, one may wonder whether our highly stylized game-theoretic bargaining models make sense at all. If structure has little relevance, then coalitional (cooperative) models may be preferred to strategic (non-cooperative) models. The fact of the matter is, however, that at the present moment we do not have an unambiguous answer to this question, either theoretically or empirically. In any case, the answer is likely to depend on the details of the bargaining situation. To illustrate this, let us consider two situations that only differ in the properties of the disagreement point.

Adrian de Groot Ruiz

is a PhD candidate at the Center for Experimental Economics and Political Decision Making (Amsterdam School of Economics). He holds a Bachelor in Liberal Arts from University College Utrecht and a Master in Econometrics from the University of Amsterdam. This article is based on his master thesis, supervised by Roald Ramer and Arthur Schram.

Consider a legislature that has to revise the budget for an ongoing war and consists of three factions: doves, moderates and hawks. Doves ideally lower the budget of the war by 20 billion, moderates prefer not to change the budget and hawks would like to increase the budget by 40 billion. They can also choose to stop the war and, in fact, if the legislature does not manage to find an agreement they will be forced to stop the war ('retreat'). None of the factions holds a majority in the legislature and any coalition of two does.

Figure 1. Preferences of three parties facing a revision of a war budget

In the first situation, retreating is very unattractive and all factions prefer any revision to retreating (see figure 1). In this case, Duncan Black's Median Voter Theorem (1948) tells us that the moderates will get their way. The reason is that a revision of 0 is the unique (strong) core element, as it will beat any revision in a direct vote: doves and moderates prefer 0 to all increases in the budget and moderates and hawks prefer 0 to all decreases in the budget. In the second situation, retreating can be attractive. Each faction prefers some revisions to retreating and retreating to other revisions. If the budget is lowered by too large an amount parties believe they cannot win the war, and if the budget is increased by too large an amount parties believe it is too expensive. In this case the Median Voter Theorem does not apply. In fact, if hawks and doves prefer retreating to a zero revision and doves and moderates prefer some revisions to retreating, then the core is empty.

What about structure? In the first situation, the precise structure of the legislative process does not seem to be very important. Having a unique strong core element is a very strong result and several authors obtained core-equivalence in their non-cooperative models of a Median Voter setting (Baron 1991, Banks and Duggan 2006). In the second situation, however, one might expect a larger degree of randomness inherent to the situation. McKelvey (1976; 1979) and Schofield (1978) showed that if the core is empty, under general conditions, all outcomes can be supported by some agenda-institution. This suggests that structure completely determines the outcome if the core is empty. Still, as doves and hawks can only coordinate on the status quo, one might still expect the moderates to wield the highest bargaining power. This seems to make some outcomes (say close to 0) more likely than others (say close to 40). In any case, structure can be expected to play an important role and it seems wise to make a non-cooperative model in this case.

However, several questions ought to be answered before such an approach can be taken. Can we solve a game that truthfully models all the fine details of bargaining? Well, probably not. Is a stylized model, in the spirit of Baron and Ferejohn (1989), then justified? Do different stylized non-cooperative models yield the same result? Suppose we can justify a stylized model because differences in structure are found not to matter much; would this then not also imply that we can say something after all without considering a non-cooperative model? Furthermore, we need to know whether real human agents respond in the same way to differences in the bargaining procedure as our models predict. In sum, we ought to know how the structure of bargaining affects its outcome, theoretically and empirically.

Theoretically, this question relates to the Nash program, which tries to establish the link between cooperative game theory (which does not take structure into account) and non-cooperative game theory (which explicitly models structure). However, to make this link one should define what a coalitional game is exactly a model of. Interpreting a coalitional game as a model for all types of bargaining situations seems problematic, as we know that the bargaining structure does have some impact: in an ultimatum game it does matter which party makes the offer. Some argue instead that cooperative games are a model for 'unstructured' bargaining situations. However, truly unstructured bargaining does not exist. Moreover, the non-cooperative games to which coalitional solution concepts have been linked in the Nash program, such as those in Binmore et al. (1986) or Maskin (1999), surely have structure. We argue that a coalitional game should not be interpreted as a model of the bargaining situation as a whole, but rather as a model of one aspect of the bargaining situation: the bargaining problem. This allows us to systematically investigate the impact of structure and reinterpret the Nash program as a theoretical inquiry into the relevance of structure and the common factors different bargaining situations may share.

This theoretical Nash program, moreover, should be complemented with an empirical counterpart. We want to know which aspects of structure are key in influencing human behavior and whether they correspond with the aspects that game theory would single out. Hence, we need a systematic study of the effects of structure on human bargaining behavior. The study described here aimed to be a first initiative and obviously must leave many questions unanswered. We focused on the relevance

of the degree of bargaining structure. In particular, we wanted to know whether a lower degree of structure increases the predictive power of coalitional solution concepts and whether it has an influence on the relative bargaining power of the players. Concretely, after defining a conceptual framework, we looked at two bargaining situations with the same bargaining problem. One situation was highly structured and the other had a low degree of structure. Theoretically, we analyzed the coalitional game corresponding to the bargaining problem and the strategic games corresponding to the highly structured bargaining situation. Experimentally, we compared the two bargaining situations in separate treatments.

Coalitional and Strategic Bargaining

Conceptually, we can distill from a bargaining situation a bargaining problem and a bargaining structure. The bargaining problem is the outcome set, the disagreement point, the set of players, their preferences and the set of winning coalitions. The set of winning coalitions specifies which coalitions of players can decide on the outcome for all players; under a simple majority rule, this set consists of all majority coalitions. The bargaining structure consists of the actions each player can take at which time, and the consequences thereof. A bargaining problem and a bargaining structure are consistent with each other if the decision rules in the bargaining structure are in accordance with the winning coalitions of the bargaining problem.

At this point we can identify a coalitional game with the bargaining problem rather than the bargaining situation as a whole. Solution concepts of the coalitional game are then properties of the bargaining problem and a relevant benchmark; whether a particular coalitional solution concept says something about the outcome of the bargaining situation can depend on the structure and is a theoretical and empirical question.

We can identify a strategic game with the bargaining situation as a whole. This allows us to game-theoretically compare different bargaining situations that share the same bargaining problem. However, strategic games have a serious limitation: a bargaining situation may have a structure that is too complex to model as a strategic game, either because we cannot define a game tree or because we cannot solve the game.

Finally, using the concept of a bargaining problem also allows us to empirically test the impact of structure by comparing different bargaining situations that share the same bargaining problem in laboratory experiments.


Two bargaining situations

The two bargaining situations mentioned in the introduction motivate the following bargaining problem. The outcome set X is the union of a line and a disjoint point, which is also the disagreement point. We have three players: a dove, a moderate and a hawk. Each of the three players has an ideal point on the line, z_i, which gives her a pay-off of 1. The ideal point of the dove is -a, the ideal point of the moderate is 0 and the ideal point of the hawk is b. Without loss of generality we can assume b to be at least as large as a. A player's pay-off from an outcome on the line decreases linearly with the distance to her ideal point. Hence, the pay-off function of player i for outcomes on the line, which we will call 'agreements', is u_i = 1 - |z_i - x|. At the disagreement point all players receive a pay-off of 0. A coalition is winning if and only if it consists of at least two players. This simple problem is interesting, because it has no obvious outcome, but one suspects that the moderate has some advantage over the others.

Now we can consider two bargaining situations with this same bargaining problem. In the first situation (High structure), the agenda-setting procedure is regulated. The bargaining consists of N rounds. In every round one of the players is randomly chosen to make a proposal, which can be any element in X (including the disagreement point), after which the other two players cast a vote. If at least one player votes 'YES', the game ends and the proposal is the outcome. If both players vote 'NO' the bargaining continues to the next round. In the final round, if both players vote 'NO', the status quo is the outcome. This situation resembles a stylized parliamentary procedure.

In the second situation (Low structure), agenda setting is not regulated and players receive T units of time to reach an agreement. At any time between 0 and T, any player can make a proposal, which remains on the table until the same player makes a new proposal. (Hence, there can be three proposals on the table, one for each player.) Moreover, at any time a player can accept the proposal of another player, in which case the bargaining ends and the proposal is the outcome. If after time T no proposal has been accepted, the disagreement point is the outcome. This situation is a more informal kind of decision making, say as it happens in the corridors and back rooms.
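For concreteness, the High structure protocol is simple enough to write down as a game loop. The sketch below encodes only the rules of the game; the propose and vote functions are strategy stubs to be supplied by the caller, and the ten-round horizon anticipates the experimental treatment described below:

import random

def play_high_structure(propose, vote, rounds=10):
    # one play of the High structure protocol: each round a random player
    # proposes, and the proposal passes as soon as one of the other two
    # players accepts; 'D' denotes the disagreement point (status quo)
    for t in range(rounds):
        proposer = random.randrange(3)
        proposal = propose(proposer, t)
        if any(vote(i, proposal, t) for i in range(3) if i != proposer):
            return proposal
    return "D"

For example, propose(i, t) could return a point on the discretized line and vote(i, proposal, t) could accept any proposal that gives player i at least her disagreement pay-off.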

Theory

The bargaining problem can be modeled as a coalitional game. We look at the core and the uncovered set of the coalitional game. The core is one of the few generally accepted cooperative solution concepts and the uncovered set has recently attracted significant empirical support (Bianco et al. 2006). To define these concepts it is useful to first define a dominance relation and a covering relation for this game. We say that an outcome x (majority) dominates an outcome y if two players strictly prefer x to y. We say that an outcome x covers an outcome y if x dominates y and all points that dominate x also dominate y. The core is the set of outcomes that are not dominated by any other point and the uncovered set is the set of outcomes that are not covered by any other point.

It turns out that there are three main cases, depending on the value of a. If a is small (a < 1) and the disagreement point is relatively unattractive, the median preference (0) is the core and the uncovered set. If a is large (a > 2) and the disagreement point is relatively very attractive, then the disagreement point is the core and the uncovered set. If a is medium (1 < a < 2), then the core is empty and the uncovered set consists of the median preference 0, the disagreement point and the point 1 - a, the point closest to the median preference where the dove has a pay-off of zero. If a = b, then the uncovered set also contains a - 1. These three cases are outlined in figure 2. Hence, without considering the structure, one would say that the median player has complete bargaining power when a is small and loses bargaining power when a becomes larger than 1.
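These case distinctions are easy to check numerically. The sketch below computes the core and the uncovered set of the bargaining problem on a discretized outcome set; the grid bounds and the step size are my own choices, and 'D' denotes the disagreement point:

import numpy as np

def core_and_ucs(a, b, step=0.05, lo=-3.0, hi=3.0):
    ideals = [-a, 0.0, b]
    X = [round(float(x), 2) for x in np.arange(lo, hi + step / 2, step)] + ["D"]

    def u(i, x):
        # pay-off 1 - |z_i - x| on the line, 0 at the disagreement point
        return 0.0 if x == "D" else 1.0 - abs(ideals[i] - x)

    def dominates(x, y):
        # x majority-dominates y: at least two players strictly prefer x
        return sum(u(i, x) > u(i, y) for i in range(3)) >= 2

    def covers(x, y):
        # x covers y: x dominates y and every point beating x also beats y
        return dominates(x, y) and all(dominates(z, y)
                                       for z in X if dominates(z, x))

    core = [x for x in X if not any(dominates(y, x) for y in X)]
    ucs = [y for y in X if not any(covers(x, y) for x in X)]
    return core, ucs

print(core_and_ucs(a=0.6, b=1.0))  # small a: both should reduce to {0}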

The High structure situation can be modeled as a strategic game. In particular, this bargaining situation corresponds with a finite-round, closed agenda-setting rule version of the popular Baron-Ferejohn (1989) model of legislative bargaining. We looked at a refinement of the Subgame Perfect Nash Equilibrium (from now on just Nash equilibrium), which can be found by backward induction and which is unique. The results are that when the core exists, the Nash equilibrium outcome will tend to the core element as the number of rounds becomes large. If the core is empty, then the Nash equilibrium outcome becomes much less consistent, as it will depend on the exact values of a and b, as well as on the number of rounds and which player is chosen to make the first proposal. In any case, also this model predicts absolute bargaining power for the moderate for small a, and a loss of her bargaining power when a becomes larger than 1. However, the Nash equilibrium outcome will typically not lie in the uncovered set. So it is striking that when the core is nonempty, both the uncovered set and the Nash equilibrium outcome are equal to the core, whereas when the core is empty the Nash equilibrium typically does not lie in the uncovered set. Hence, from a game-theoretic point of view this means that when the core is empty the bargaining structure is important: it can move the outcome away from those outcomes which one could expect without considering structure.

The Low structure situation has too little structure to be properly modeled as a strategic game. As agents can make decisions in continuous time, one cannot define clear decision nodes. Although continuous-time games are being explored, the current techniques do not seem to allow the modeling of this bargaining situation. This means that we cannot compare the role of structure between these two situations game-theoretically.

Experiment

To investigate the role of structure empirically, we ran an experiment with two treatments: a Low treatment in which subjects play the Low structure bargaining situation and a High treatment in which subjects play the High structure bargaining situation with 10 rounds. For the experiment, the outcome line was discretized in units of 0.05. Within each treatment we looked at 12 parameter pairs for a and b: six with a small a (the core is the moderate's ideal point) and six with a medium a (the core is empty). For each treatment we ran three sessions, each of which consisted of two matching groups of 6-9 subjects. Hence, we had 12 independent data points, which are few, but sufficient to perform the proper non-parametric tests. Subjects were paid and earned on average (on top of a show-up fee) 11.70 euros.

Our main empirical result is that structure matters. In the first place, the moderate player does better in Low than in High. In both treatments, the median agreement frequency was 100% (all outcomes were points on the line) for a small a and 83% for a medium a. However, the median distance between agreements and the moderate's ideal point is smaller in Low (0.24) than in High (0.31), a difference which is significant at the 10% level for a one-tailed test (see table 1).

In the second place, for medium a the uncovered set predicts better in Low than in High. Whereas the uncovered set attracts 49% of all outcomes in Low, it attracts 32% of all outcomes in High. (By 'attract' we mean how many outcomes fell in the 0.05-neighborhood of points in the uncovered set.) This is significant at the 10% level in a two-tailed test (see table 2).

Table 1: Median agreement frequency (f) and median distance between agreements and the moderate's ideal point (D)

                    Median f                         Median D
          Low     High    Difference       Low     High    Difference
a < 1     100%    100%    0%               0.14    0.21    -0.07
a > 1     83%     83%     0%               0.36    0.46    -0.10
overall   92%     92%     0%               0.24    0.31    -0.07*

* significant at the 10% level (one-tailed)

Table 2: % of outcomes attracted by the 0.05-neighbourhood of a point/set

        Point/Set               Low     High    Dif Low-High
a < 1   disagreement point      0%      0%      0%
        0                       36%     23%     13%*
        -0.5a                   22%     40%     -18%*
a > 1   UCS:
          disagreement point    17%     17%     0%
          0                     10%     5%      5%*
          1 - a                 10%     3%      7%
          a - 1                 17%     6%      11%
          Total                 49%     32%     17%*
        SPNE                            22%
        -0.5a                   17%     17%     0%

* significant at the 10% level


Our explanation for the difference between the two treatments is that the Low structure treatment allows for active negotiation, while in the High structure treatment subjects must wait for their turn (if it comes at all). This can increase the possibilities for the moderate player to exploit her bargaining position in Low. Moreover, it can enhance the predictive power of the uncovered set in Low, as that concept is based on the idea that people make and consider several proposals.

Conclusion

Does structure matter? Theory could not provide an answer, as a low-structure situation does not allow for a non-cooperative description. Experiments did give us an answer: the degree of structure indeed matters. In particular, our results suggest two conclusions. First, players who have a good bargaining position are better off in a less structured bargaining situation. Second, solution concepts based on the coalitional game provide more information on the bargaining outcome when the bargaining situation has less structure.

References

Banks, J. and Duggan, J. (2006). A General Bargaining Model of Legislative Policy-making, Quarterly Journal of Political Science, 1(1), 49-85.

Baron, D. and Ferejohn, J. (1989). Bargaining in Legislatures, American Political Science Review, 83(4), 1181-1206.

Baron, D. (1991). A Spatial Bargaining Theory of Government Formation in Parliamentary Systems, American Political Science Review, 85(1), 137-164.

Bianco, W.T., Lynch, M.S., Miller, G.J. and Sened, I. (2006). A Theory Waiting to Be Discovered and Used? A Reanalysis of Canonical Experiments on Majority-Rule Decision Making, Journal of Politics, 68(4), 838-851.

Binmore, K., Rubinstein, A. and Wolinsky, A. (1986). The Nash Bargaining Solution in Economic Modelling, RAND Journal of Economics, 17(2), 176-188.

Black, D. (1948). On the Rationale of Group Decision-making, Journal of Political Economy, 56(1), 23-34.

Maskin, E. (1999). Nash Equilibrium and Welfare Optimality, Review of Economic Studies, 66, 23-38.

McKelvey, R.D. (1976). Intransitivities in Multidimensional Voting Models and Some Implications for Agenda Control, Journal of Economic Theory, 12(3), 472-482.

McKelvey, R.D. (1979). General Conditions for Global Intransitivities in Formal Voting Models, Econometrica, 47(5), 1085-1112.

Schofield, N. (1978). Instability of Simple Dynamic Games, Review of Economic Studies, 45(3), 575-594.


Puzzle


As usual, we challenge you to solve these fresh new puzzles for your own mathematical pleasure. But first we present the solutions to the puzzles of the previous Aenorm.

A popular sport among econometricians

All of the holes are of a length which is divisible by 25, so it is obvious that the length of the strokes should be a multiple of 25. By some simple reasoning it is best to use a driver of 150 yards and an approach of 125 yards to complete the nine-hole course. By doing so, you can run the course in twenty-six shots.

Uncle Sam’s Fob Chain

To solve this puzzle it is important to notice that the coins and the eagle can each be displayed on two sides, which makes two different combinations per item. This way the first coin can be displayed in 10 ways, the second in 8 ways, the third in 6 ways and the fourth in 4 ways. This gives 10*8*6*4 = 3,840 combinations. By changing the order of the coins, twenty-four (4!) different strings of coins can be made. So the correct answer to the puzzle is 3,840 times 24 = 92,160.

Unfortunately, we only received a correct submission for the first puzzle. However, since S. Hoving managed to beat Sam Loyd by finding a new correct answer to the first puzzle, he still deserves to be the winner of the book token. Congratulations!

The new puzzles for this edition are:

Domestic Complications

This puzzle involves the ordinary affairs of life and seems to be much harder for a mathematician to solve than it is for a housewife. Smith, Jones and Brown were great friends. After Brown's wife died, his niece kept house for him. Smith was also a widower and lived with his daughter. When Jones got married, he and his wife suggested that they all live together. Each one of them was to contribute $25.00 monthly for household expenses and what remained at the end of the month was to be equally divided. The first month's expenses were $92.00. When the remainder was distributed each received an even number of dollars without fractions. How much money did they receive and why?

Yacht Race

Below a race is sketched between two yachts on a triangular course from buoy A to B to C, then back to A again. This puzzle is about three people on the winning yacht who tried to record the speed of the boat, but became seasick before reaching the finish. Therefore, the first one was only able to observe that the yacht sailed the first three-quarters of the race in four and a half hours. The second one noted only that it did the final three-quarters in four and a half hours. The third one observed that the middle leg of the race (B to C) took ten minutes longer than the first leg. Assuming that the buoys mark an equilateral triangle and that the boat had a constant speed on each leg, can you tell how long it took the yacht to finish the race?

Solutions

Solutions to the two puzzles above can be submitted up to March 1st. You can hand them in at the VSAE room (room C6.06), mail them to [email protected] or send them to VSAE, for the attention of Aenorm puzzle 58, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one book token will be given away. Solutions can be submitted in both English and Dutch.

Facultive

Kraket (Free University Amsterdam)

And that’s the end of 2007. I assume every-one has had a merry Christmas and a pleasant change of year. We certainly have. It is a bit late, but I would like to wish everyone a happy new year. We from the board of Kraket will pro-vide you with a lot of activities to make 2008 a very good year, starting off with a movie night and a karaoke evening. An indoor football tour-nament will take place in February. Important for both econometrists looking for a job and for students that would like to know more about possibilities in the business is the LED, which will be held in Eindhoven this year. We would like to welcome you on these activities, their exact dates are listed below.Kraket wouldn’t be the same if there were no mystery activities, so start guessing where our Kraketweekend and the ActieveLedenDag will take place.

Agenda

7 February Karaoke

21 February LED (Eindhoven)

27 February Futsal sponsored by Mercer

2 April ALD

30 May - 1 June Kraketweekend

VSAE (University of Amsterdam)

During the period of exams, we are tying up the loose ends of our year on the board of the VSAE. A fresh new board has been found to take over in February. We have full confidence in them, and are sure that they will give a great follow-up to the things accomplished in the past.

In November we organized the Financial Econometrics (FinEco) project. Several companies in the energy trading branch presented their respective fields of work during several workshops and a presentation. Three days after FinEco, we left with a group of 48 students for Copenhagen. We visited several companies and the University of Copenhagen. Of course we also explored the nightlife of Copenhagen.

In December, the Actuarial Congress took place. 200 actuarial students and employees came to the NH Barbizon to listen to presentations and a discussion on Pension Risk Management. The agenda for the coming months is relatively empty. On the 5th of February the VSAE will organize the Personal Development Day, when 24 students will spend a day in training and discovering their strong and weak points.

In March the VSAE will celebrate its 45th anniversary. For this occasion we have planned several activities in the week of 10 to 14 March. Especially worth mentioning are the reception on the 10th of March in Cristofori and the gala on Friday the 14th.

Agenda

5 February Personal Development Day

7 February General Members Meeting

12 February Monthly free Drink

21 February LED (Eindhoven)

10-14 March 45th anniversary VSAE

7-8 April Econometric Game
