International Journal of Project Management 24 (2006) 303–313

Key points of contention in framing assumptions for risk and uncertainty management

Chris Chapman

University of Southampton, School of Management, Southampton SO17 1BJ, United Kingdom

Received 25 November 2005; received in revised form 17 January 2006; accepted 26 January 2006

Abstract

This paper explores the relationship between ‘common practice’ as defined by a simple reading of PMBOK Chapter 11 and ‘best practice’ as approached (but not quite achieved) by two alternative guides (PRAM and RAMP) in terms of key points of contention in framing assumptions which everyone interested in project management as a whole ought to understand. An immediate purpose is helping readers to avoid some of the current confusion about the difference between ‘common practice’ and ‘best practice’. A longer term goal is influencing the shape of future project risk management guides, to enhance them individually, and to make them easier to use collectively. ‘Best practice’ definition is itself contentious. Other authors are encouraged to debate the definition of ‘best practice’ and explore the position of other guides. The framing assumptions are considered in terms of basic concepts: ‘probability’, ‘uncertainty’, ‘risk’, ‘optimisation’ and ‘opportunity’. A practical example of the implications is provided via analysis of the use of probability–impact (PI) matrices and associated PI indices (risk indices or scores). The use of PI indices is ‘common practice’, but it is a clear indication that ‘best practice’ is not being followed, for reasons clarified in this paper. A follow-on companion paper considers related generic process definition issues.
© 2006 Elsevier Ltd and IPMA. All rights reserved.

Keywords: Project risk management; Uncertainty management; Guides; Probability–impact matrices and indices or scores; Risk indices or risk scores

1. Introduction

The need for effective and efficient management of risk and uncertainty in projects is not contentious. How best to satisfy this need is highly contentious amongst those involved in contributing to the project risk management literature, especially amongst those involved in producing guides under the auspices of professional bodies and government agencies. However, direct debate about points of contention has been limited, and for the most part it has been confined to discussions within groups producing guides.

Consequences of this lack of public debate include a significant gap between ‘best practice’ and ‘common practice’, and considerable confusion about what ‘best practice’ involves. This paper attempts to stimulate debate involving the project management community as a whole about the differences between ‘best practice’ and ‘common practice’ framing assumptions. To clarify the implications in immediate practical terms an example is employed – the use of PI (probability–impact) matrices and associated PI indices (risk indices or scores), a ‘common practice’ tool.

PI indices are a clear symptom of ‘common practice’ which is not ‘best practice’, a statement some readers may see as contentious. They are used in this paper to illustrate the practical implications of framing assumptions while avoiding the details of generic processes which require a separate paper. PI indices are central to the risk management chapter in the third edition of the Project Management Body of Knowledge (PMBOK) guide [1, chapter 11], produced by the Project Management Institute (PMI), referred to as PMBOK 2004 in this paper. They are accommodated and provide a significant source of confusion in the second edition of the Project Risk Analysis and Management (PRAM) Guide [2], produced by the Association for Project Management (APM), referred to as PRAM 2004 in this paper. They are also accommodated and provide a minor source of confusion in the second edition of the Risk Analysis and Management for Projects (RAMP) guide [3], produced by the Institution of Civil Engineers and the Actuarial Profession, referred to as RAMP 2005 in this paper. In all three cases the date is dropped when the meaning is clear.

This paper restricts itself to three guides, which requires explanation. PMBOK 2004 was included because it closely reflects ‘common practice’ as understood by the author. Guidelines produced by professional bodies necessarily seek consensus. This can lead to a ‘lowest common denominator’ syndrome. All guides are practice led to some extent, although their intentions are to lead practice. PMBOK 2004 reflects common practice more strongly than PRAM 2004 or RAMP 2005. This may be attributable to stronger pressure within the group which produced it to accommodate common practice. However, the author believes that PMI global reach and the straightforward nature of the PMBOK guide are important reasons why a simple reading of PMBOK defines common practice. PRAM 2004 and RAMP 2005 were included because they are effective alternatives which approach ‘best practice’, the remaining gap is of interest, and the author was directly involved in discussing key points of contention during their drafting. The issues raised are relevant to other guides and the literature more generally. Other guides were beyond the scope of a single paper, but other authors are encouraged to contribute to the debate by extending the discussion to other guides and other approaches in the broader literature. The author’s view of ‘best practice’ is clearly a legitimate target as part of this debate. Some readers may wish to debate ‘common practice’, but this would be less productive.

The author was responsible for drafting the process chapter in PRAM 1997 [4], and a co-author of the substantially revised process chapter in PRAM 2004, as well as making more general contributions to both editions, like all members of both working parties. This paper was stimulated in part by the extensive discussions of unresolved differences in opinion which took place during the second working party’s deliberations. The management of the working parties on both occasions was very effective, and all contributors were collaborative and constructive in their responses to differences in opinion. Contributors provided expertise based on experience across a wide range of industry, from consultant, contractor and client perspectives. PRAM 2004 takes a bold step forward relative to PRAM 1997 and PMBOK 2004, while accommodating sustained and unresolved arguments about what is ‘best practice’ as distinct from ‘common practice’, and what should be recommended, tolerated, or excluded. Accommodating deeply held conflicting views to the extent achieved by the PRAM 2004 working party was collaboration in the best possible spirit. In my view this was not a mistake. It was an important step in a process to reach agreement which will certainly take time and may never produce complete convergence. The views on key points of contention expressed in this paper were not shared by all members of the PRAM 2004 working party, but the process of producing a consensus was very illuminating, and all members of the working party deserve credit for the illumination this paper tries to pass on. This paper is concerned with explaining what is involved in the areas where consensus was difficult, because a working understanding of all current guides and the literature more generally requires clarity on these points of contention, as does enhancing all future guides and common practice.

The author contributed to the development of the process structure in the 1998 first edition of RAMP, which has not changed, to the editorial processes of both editions, and to the general discussions of the working party. Points of contention were not a significant concern, so this paper does not draw on RAMP to the same extent, but the RAMP discussion complements and extends the PRAM discussion in a useful manner. The RAMP working party brought together a comparable range of interests and skills, but it was different, diversity in professional backgrounds being one key difference (actuaries, economists and engineers), more senior management and board level experience being another. Management of the RAMP working party was comparable to the PRAM working parties in terms of its high quality, but different, more direct control by the chair/chief editor and the originator of the process definition leading to greater internal consistency being the key differences. The less contentious nature of the discussions meant that collaboration was not tested to the same degree, but it was comparable in quality.

The author has not contributed to PMBOK guides, but part of the stimulation for this paper and a related earlier paper [5] was provided by an invitation to give the earlier paper at a PMI Risk SIG conference in California, and part of the purpose of this paper is to provide a basis for ongoing dialogue with PMI Risk SIG members.

Several PMBOK contributing authors provided very useful feedback on an earlier draft of this paper, and feedback from contributing authors of all three guides shaped the final draft of this paper significantly.

Key points of contention are addressed in this paper in terms of framing assumptions. This is not the basis on which most discussions about them took place. Such discussions usually focused on process implications. Hindsight suggests direct discussion of framing assumptions might have been a more productive starting place.

In each case the ‘span’ of the framing assumption is defined on a 5 point scale, from 0 to 4. ‘Span’ reflects generality, range or scope. Point ‘0’ signifies zero span, point ‘1’ signifies a minimal level of consideration, and point ‘4’ signifies a ‘best practice’ level of consideration, as ‘best practice’ is currently understood by the author. Intermediate points were chosen to facilitate discussion. A 5 point scale provides a good framework for discussion at a useful level of detail. The definitions of points are unambiguous, and higher values are unambiguously preferable, but all three guides involve ambiguity in the extent to which they meet these points. Illuminating and eliminating this ambiguity is part of the purpose of this paper, not a limitation in the analysis provided. Higher points require a more general framework for understanding issues, each successive level adding to lower levels. The points do not define a ‘maturity’ scale – level 4 is directly accessible if the principles involved are accepted. However, some readers will find some of the higher points contentious, and when this is the case references provided will have to be pursued to fully test the views held. ‘Maturity’ scales [6,7] should reflect appropriate framing assumptions, an issue worth exploration.

Fig. 1. A basic PI matrix showing PI index values. [Figure: a 3 × 3 grid with impact (low, medium, high; scale values i0 = 0, i1, i2, i3) on the horizontal axis and probability (low, medium, high; scale values p0 = 0, p1, p2, p3 = 1) on the vertical axis. PI index values by row, from high to low probability: r3 r4 r5; r2 r3 r4; r1 r2 r3, so the three boxes on each diagonal share a common index value.]

The framing assumptions are each allocated a section in the body of this paper. These sections are ordered to help clarify dependencies, which are extensive and important. The space allocated to each reflects the relative importance of clarity, linked to the extent of the contention involved, feedback from the contributing authors of all three guides providing crucial guidance in this area. To illustrate some of the practical implications, PI indices are used as an illustrative example.

A companion paper [8] considers further implications for generic process definition, and addresses how this definition is affected by process drivers, like the stage in the project life cycle of current interest. It addresses important ‘best practice’ and ‘common practice’ comparisons and associated differences between guides which follow on from those considered in this paper.

2. The span of the ‘probability’ concept employed

(0) probabilities are not employed explicitly,
(1) probabilities have to be objective,
(2) subjective probabilities are sometimes acceptable ‘guestimates’,
(3) subjective probabilities are the only practical choice available,
(4) subjective probability based adjustment for the implications of all assumptions and conditions may be essential.

Part of the initial purpose of the PRAM guide project was the provision of simple standard process descriptions and terminology, to avoid the confusion generated by different descriptions of common concepts. However, even within the bounds of very collaborative working groups, this goal has proven elusive. The most fundamental point of contention for both PRAM working parties seemed to be the definition of ‘risk’ and ‘uncertainty’ to be adopted. However, underlying this debate was a framing assumption not discussed explicitly – the span of the probability concept employed.

Assuming any probabilities employed must be ‘objective’ is a position taken by many engineers and ‘hard scientists’. They usually have an education which included axiomatic objective probabilities and frequency based objective probabilities, but no formal treatment of ‘subjective’ probabilities. Consider the implications of this position in terms of a very simple example – you are offered a bet on the outcome of a coin toss. This position means you can assume the coin is ‘unbiased’, and the toss is ‘fair’, with an axiomatic 0.5 probability of a head or tail. Or you can assume the best estimate of the probability of a head is f/n, where f heads have been observed in n ‘fair’ trials. Or you can use Bayes theorem to combine an axiomatic prior and a frequency based posterior estimate. This ‘probability span = 1’ position might be characterised as a ‘classical’ position on probability. It is suitable for some hard sciences, but it provides very limited managerial decision taking capability, and in practice this limited capability frequently forces a ‘probability span = 0’ position.
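To make the contrast concrete, the following sketch (an illustration added here, not drawn from any of the guides) shows the frequency based estimate f/n alongside one standard way of using Bayes theorem to combine an axiomatic prior centred on 0.5 with the same trial data; the Beta prior and its assumed strength are hypothetical choices made purely for illustration.

```python
# Illustrative sketch only: contrasting the frequency view with a Bayes-theorem
# combination of an 'axiomatic' 0.5 prior and observed trial data, using a
# Beta-Binomial model. The prior strength of 10 'pseudo-tosses' is a hypothetical
# choice for illustration, not something taken from the guides.

def frequency_estimate(f: int, n: int) -> float:
    """Classical frequency based estimate of P(head): f heads in n 'fair' trials."""
    return f / n

def bayes_estimate(f: int, n: int, prior_mean: float = 0.5, prior_strength: float = 10.0) -> float:
    """Posterior mean of P(head) for a Beta prior centred on prior_mean,
    updated with f heads observed in n tosses."""
    alpha = prior_mean * prior_strength + f
    beta = (1.0 - prior_mean) * prior_strength + (n - f)
    return alpha / (alpha + beta)

f, n = 7, 10                          # hypothetical trial data
print(frequency_estimate(f, n))       # 0.7 - frequency view
print(bayes_estimate(f, n))           # 0.6 - pulled back towards the axiomatic 0.5 prior
```

With 7 heads in 10 hypothetical tosses the frequency view gives 0.70, while the Bayes combination is pulled back towards the axiomatic 0.5, illustrating the limited room for judgement this ‘classical’ toolkit allows.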

Fig. 1 illustrates a basic PI matrix, which has its roots in a classical position on probability. Early use of this portrayal, without a calibrated probability scale, can be associated with picturing the probability and impact of hazards by engineers concerned with safety analysis half a century ago. Their classical view of probability meant they could not assign risks to boxes and define the p1 or p2 values, but they could define impact scale values, and an associated PI index, to provide a crude measure of the relative importance of hazards, to guide their search for hazard reduction. The PI index values r1 to r5 indicated in Fig. 1 could be given increasing values, like 1–5, or decreasing values.

This classical position on probability can be linked to a classical position on ‘decision analysis’, adopted by most textbooks discussing decision analysis before the mid 1960s, some since. ‘Decisions under risk’ involved objectively determined probabilities. ‘Decisions under uncertainty’ involved no available probabilities. Different decision strategies were recommended for these two different circumstances. Chapter 23 of [9] illustrates this classical approach to decision analysis and probabilities, and contrasts it with the ‘modern’ approach considered below.

A ‘probability span = 2’ position can be associated with a ‘midway’ position on probabilities and decision analysis – subjective probabilities are sometimes acceptable ‘guestimates’ which can be used when objective probabilities are not available, but they do not embrace objective probabilities – they are an alternative to objective probabilities, and they are inherently less reliable. Those who hold this view have usually been exposed to subjective probabilities in terms of basic decision analysis courses, but they may not have had formal exposure to generalised views of subjective probabilities. Developing the earlier example from this position, if a bet on a coin toss is mandatory, and the coin is clearly bent, a subjective prior might involve a guestimate of the departure from 0.5 because of the bend, but such a guestimate is not really a proper probability. PMBOK 2004 seems to be associated with this position. The key indicator is the central role of PI indices, a tool which is not compatible with a ‘probability span = 3’ position, for reasons explained below. PI matrices like Fig. 1 used in the PMBOK "Qualitative Analysis" process can specify all probability scale values, to avoid ambiguity in terms of the probability scale, but assigning a ‘risk’ to a box is a guestimate. Non-linear scales may be used for one or both axes, with a ‘score’ equivalent to the Fig. 1 index defined by the product of logarithmic scale values, for example. This obscures the one dimensional nature of the risk measure, as discussed below, but it does not address the fundamental problems considered below. "Quantitative Analysis" using probabilities may follow, but PMBOK explicitly suggests such analysis is often unnecessary, and in terms of associated practice it is often omitted or limited in terms of the span of risk and uncertainty addressed because of its PI index basis. This position involves an alignment of PMBOK and ‘common practice’.
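The one dimensional nature of such a score can be illustrated with a minimal sketch; the probability and impact values below are hypothetical, and the simple product is used only to stand in for the index or score idea discussed above, not the exact PMBOK scoring scheme.

```python
# Illustrative sketch only: a one dimensional PI 'score' collapses probability
# and impact into a single number, so very different risks become
# indistinguishable. The scale values below are hypothetical.

def pi_score(probability: float, impact: float) -> float:
    """One dimensional PI index: probability times impact (impact in arbitrary units)."""
    return probability * impact

frequent_minor = pi_score(probability=0.9, impact=1.0)       # a near-certain minor issue
rare_catastrophic = pi_score(probability=0.01, impact=90.0)  # a rare catastrophic issue
print(frequent_minor, rare_catastrophic)  # both print 0.9 - the score cannot tell them apart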

A ‘probability span = 3’ position can be associated with a ‘modern’ position on probability and decision analysis – subjective probabilities are the only practical choice available, and they should be fully understood and exploited. Further developing the earlier examples from this position, if a bet now on the outcome of ten successive tosses of a coin is mandatory, and the coin is clearly bent, it would be naïve to simply assume an unbiased coin or fair tosses on any trials available, and even if these assumptions are made, the subjective nature of such judgements renders the estimate subjective. The only sensible approach to making appropriate decisions would include a subjective estimate of the expected probability based on an adjustment from 0.5, plus a subjective estimate of the associated likely error, which will not cancel out on successive throws. Subjective probabilities are not a second best alternative, they are the only practical choice available, so they must be understood, embraced, and used effectively. Subjective probabilities can be grounded on data and classical objective probabilities, and doing so means they are an extension, not an alternative. Indeed, it is essential to embrace objective probabilities as special cases to reflect the implications of the assumptions used to derive the objective probabilities, which may or may not suggest adjustments to any associated parameter estimates. Subjective probabilities interpreted in this inclusive manner have a wider span than objective probabilities. This wider span includes judgement about the relevance of the data and the robustness of any assumptions, integrating the views of all relevant parties to achieve an internally consistent and coherent view to the extent this is worth while. Using subjective probabilities is driven by the desire to make decisions which are consistent with the best use of all expertise available, as well as the best use of all data available, because on average this ought to be better than using probabilities which are not consistent in this sense. Subjective probabilities may be given an axiomatic basis which renders them respectable [10]. They can also be treated as models of different orders, allowing for probability distributions of probability values when this is deemed useful [11], not the case in terms of modern decision analysis, but very much the case in terms of a ‘minimalist’ approach to PI matrices [12–14].

A real case study example may help to clarify this position. In 1976 the author designed a risk management process for BP International for offshore North Sea projects. It was tried out for the first time on the Magnus project, which subsequently came in on-time and on-cost, despite some significant surprises. One source of risk when laying pipe was ‘wet buckles’ – the pipe fractures and fills with water, becoming too heavy for the barge to hold, leading to major difficulties. The engineers involved in the planning had data on past North Sea buckles, and the number which had occurred divided by the kilometres laid provided an objective estimate of the probability of a buckle per kilometre to be laid. However, in their view a better estimate of this probability was 50% of the data based objective estimate, because of improvements in the equipment in use and learning by the pipe laying contractors, with a confidence band which was not formally defined but of the order ±20%. The 50% adjustment was used. This was clearly a subjective estimate, but it was much better than the data based estimate. It was tempting to use econometric methods to model the trend in the data objectively, and estimate a confidence band objectively, because the author had recently completed a PhD in mathematical economics and econometrics. Econometrics is a branch of statistics using a classical perspective but distinguished by a concern for testing the validity of classical statistical assumptions, and dealing with failures in these assumptions, via statistical techniques or reformulation of the economics aspects of the modelling assumptions. However, this would have involved subjectively assessed assumptions, which in the author’s view would not have been as robust as letting experienced engineers reflect on the joint effect of complex interdependent issues they had a reasonable understanding of. The author’s first degree was engineering, and ‘engineering judgement’ is a highly valued notion, provided conscious and unconscious bias is appropriately managed. Objectivity in a strict sense was not an option. The most effective subjective estimate was the only choice available. The most data rich management decision context imaginable does not change this reality, and assuming otherwise involves a misunderstanding of proper science.
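The arithmetic involved is trivial, which is part of the point; the following sketch uses hypothetical numbers (not the BP data) purely to show the shape of the adjustment the engineers made.

```python
# Illustrative arithmetic only: the buckle count, kilometres and adjustment below
# are hypothetical numbers used to show the shape of the Magnus-style adjustment,
# not the actual BP data.

buckles_observed = 4          # hypothetical North Sea record
kilometres_laid = 2000.0      # hypothetical

objective_rate = buckles_observed / kilometres_laid   # buckles per km, from data alone
adjustment = 0.5                                       # engineers' judgement: 50% of the data based rate
confidence_band = 0.2                                  # of the order +/-20% around the adjusted rate

subjective_rate = adjustment * objective_rate
low, high = subjective_rate * (1 - confidence_band), subjective_rate * (1 + confidence_band)

print(f"objective estimate : {objective_rate:.4f} buckles/km")
print(f"subjective estimate: {subjective_rate:.4f} buckles/km (plausible range {low:.4f} to {high:.4f})")
```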


A modern view of probability makes PI indices redundant and PI matrices in Fig. 1 format unhelpful. Allocating a risk to any square in Fig. 1 requires interpretation in subjective probability terms. Fig. 2 shows two example subjective probability assumptions which could be associated with the central box of Fig. 1. Working assumptions for a simple model might involve a uniform probability distribution between i1 and i2, with another uniform probability distribution between p1 and p2, mid-point expected values in both cases. This might be associated with a presumed reality as shown, avoiding the effort of trying to work with the more complex reality. The same simple model could be used for all boxes, although the associated presumed reality would need adjustments to cope with bounds associated with axes. One key advantage of this simple subjective probability model interpretation relative to a conventional PMBOK interpretation is facilitating the use of standard probability combination procedures to aggregate across sources of risk without being forced to use a one dimensional PI index. Another key advantage is we no longer need to force the use of a pre-specified box. We can accommodate whatever box shape suits each source of uncertainty, using Fig. 3, and interpreting each box in Fig. 3 via Fig. 2. If we prefer non-linear scales for our axes, they can be used. A further key advantage is we do not actually need graphs with pre-specified common scales. We can use the framework provided by Fig. 3, but just ask for ‘plausible minimum’ and ‘plausible maximum’ estimates corresponding approximately to 10 and 90 percentile values for the required uniform distributions, following best practice by avoiding the use of absolute maximum or minimum values when soliciting estimates [15]. As Fig. 3 implies, this minimalist approach avoids wasting information, capturing much more precise information based on the expertise of the estimator, as well as avoiding forcing guestimates into boxes they may not fit. As Figs. 2 and 3 indicate, this minimalist approach needs no pretence of unwarranted precision. If realising the source of risk is certain, only the effect being uncertain, a PI approach becomes particularly awkward, but the generalised minimalist approach copes without difficulty. This means we are not restricted to a narrow event based view of sources of uncertainty and risk, as considered later, possibly the most important reason PI matrices are not part of ‘best practice’.

Fig. 2. Two example subjective probability distribution assumptions associated with both dimensions of the central Fig. 1 box. [Figure: probability density plotted against impact or probability, contrasting a rectangular ‘working assumptions for the model’ distribution with a ‘presumed reality’ curve (which may involve multiple modes), with the expected value marked.]

Fig. 3. A ‘minimalist’ view of PI matrices. [Figure: probability (0 to 1.0) plotted against impact (from 0), with differently shaped boxes for issue number 1 (uncertain probability and impact), issue number 2 (very uncertain probability, predictable impact) and issue number 3 (reliable probability estimate, but very uncertain impact).]
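As an illustrative sketch of how such ‘plausible minimum’ and ‘plausible maximum’ estimates might be used, assuming the uniform distribution reading described above, the following Python fragment recovers the full range from the 10 and 90 percentile values and combines three hypothetical sources by simulation rather than via a one dimensional index; the independence assumption is made only to keep the sketch short, and the text elsewhere warns it is often optimistic.

```python
# Illustrative sketch only, under the uniform-distribution reading of the
# 'minimalist' approach: plausible minimum and maximum estimates are treated as
# the 10 and 90 percentile values of a uniform distribution, the full range is
# recovered, and sources are combined by simulation. Independence between the
# hypothetical sources is an assumption made purely to keep the sketch short.

import numpy as np

def uniform_from_p10_p90(p10, p90):
    """Recover the (a, b) bounds of a uniform distribution whose 10th and 90th
    percentiles are p10 and p90 (the central 80% spans 0.8 of the full range)."""
    width = (p90 - p10) / 0.8
    a = p10 - 0.125 * (p90 - p10)
    return a, a + width

# Hypothetical plausible-minimum / plausible-maximum cost estimates for three issues.
issues_p10_p90 = [(10.0, 30.0), (5.0, 45.0), (20.0, 24.0)]

rng = np.random.default_rng(1)
samples = sum(
    rng.uniform(*uniform_from_p10_p90(p10, p90), size=100_000)
    for p10, p90 in issues_p10_p90
)

print("expected total cost:", samples.mean())
print("80% range of total :", np.percentile(samples, [10, 90]))
```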

Significant portions of PRAM 2004 can be associated with a modern approach to probabilities, including a minimalist approach, but portions supporting the use of PI matrices are clearly in a midway position, and other portions do their best to accommodate both. A clear and consistent modern view of probabilities renders basic PI indices redundant and the constrained nature of PI matrices unhelpful, although variants of Fig. 3 and alternative views of this information can be useful. The extent to which a midway position is accommodated in PRAM 2004 is confusing in the author’s view. It is understandable in the context of the range of views on both working parties and the early decision to limit scope changes for PRAM 2004. Further, it is understandable in the context of a wish to avoid inflexible or unduly prescriptive guidance – in general there is more than one way to do most risk management tasks, and PRAM tries to explain the range available. However, it is confusing, and this matters in terms of helping inexperienced users. RAMP 2005 also endorses a modern position on subjective probabilities, including a minimalist approach, and it also accommodates a midway position and PI indices with ‘health warnings’. However, less attention to PI indices reduces the confusion. Both these guides muddy the water in terms of their achievement of a ‘probability span = 3’ position by their different degrees of accommodation of a PI index approach. Assessing a fractional rating for either serves no useful purpose, but understanding the point of contention involved and the position of all three guides is useful.

A ‘probability span = 4’ position can be associated with a ‘generalised modern’ position on probability and decision analysis – subjective probability based adjustments for the implications of all assumptions and conditions may be essential, and may need careful development and articulate explanation, a position consistent with [11], but beyond the conventional modern view of subjective probabilities. In brief, the most sophisticated and effective assessment of risk and uncertainty available will allow us to estimate the expected value of a parameter like cost or duration with a clear understanding of what sources of uncertainty have been included, what sources of uncertainty are treated as conditions or assumptions, and the nature of the bias associated with all other assumptions. The author has in mind the kind of process used for the Magnus project mentioned earlier [13,14], the context which first raised the issue for the author. If we want to treat this estimate as unconditional and unbiased, we need to consider the possible need for three adjustments, applying three factors. The first is for ‘known unknowns’, all sources of uncertainty which were identified but treated as conditions or assumptions. For example, ‘no project scope changes’ may have been assumed, but some scope changes may be known to be inevitable. The second is for ‘unknown unknowns’, all similar sources of uncertainty which were not identified. Some are usually inevitable. The third is for any other sources of bias, like optimistic assumptions of independence when combining probability distributions, or pessimistic assumptions of perfect positive correlation when combining probability distributions. Perfect or partial positive correlation assumptions can be used deliberately to offset bias associated with known and unknown unknowns, a significant improvement relative to compounding these two sources of bias by assuming independence, and a useful practical approach. The net effect of all three can be referred to as a ‘cube factor’, a simplification of the abbreviation kuuub, for known unknowns, unknown unknowns and bias, related to a three dimensional cube shape portrayal. Usually the role of a cube factor is insight, not numerical precision, but sophisticated risk management processes could be applied in some circumstances. For a brief elaboration of this cube factor notion see [16]. For a more detailed treatment see [13,14]. For a discussion of Donald Rumsfeld’s famous "unk-unks" quote, traced to the first edition of [14] or secondary citations, see [17].
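Neither the guides nor the text above prescribe an equation for a cube factor, and its usual role is insight rather than numerical precision; the following sketch simply shows how three hypothetical multiplicative factors might be composed to adjust an expected value, as one way of picturing the idea.

```python
# Illustrative sketch only. The text describes three adjustments - for known
# unknowns, unknown unknowns and bias - whose net effect is a 'cube factor'.
# Neither the factor values nor the simple multiplicative composition below come
# from the guides; they are assumptions used to show how such an adjustment
# might be pictured, with insight rather than numerical precision as the goal.

base_expected_cost = 100.0   # expected cost from the quantified analysis (hypothetical units)

f_known_unknowns = 1.10      # identified sources treated as conditions/assumptions (e.g. scope changes)
f_unknown_unknowns = 1.05    # similar sources not identified at all
f_bias = 1.08                # other bias, e.g. optimistic independence assumptions

cube_factor = f_known_unknowns * f_unknown_unknowns * f_bias
adjusted_expected_cost = base_expected_cost * cube_factor

print(f"cube factor            : {cube_factor:.3f}")
print(f"adjusted expected cost : {adjusted_expected_cost:.1f}")
```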

One simple variant of the cube factor notion is an ‘engineering factor of safety’, traditionally used in a wide range of areas. Another variant, considered in [16], is the ‘optimistic bias’ adjustment factor proposed by HM Treasury [18], subsequently adopted by the Department for Transport [19]. Treasury optimistic bias adjustment factors are simple ‘objective’ probability estimates of the average adjustment needed based on classical statistical analysis of estimates and outturns by industry sector. Their estimation does not reflect the extent or quality of the risk and uncertainty management processes in place for the projects used as data, and guidance on how to adjust the factors to reflect the quality of the risk management process for each project they are applied to is not provided, fundamental defects from an econometric perspective which can lead to a range of problems. RAMP 2005 considers this Treasury approach in terms of some obvious defects. RAMP also stresses the importance of an ‘assumptions list’, the ‘known unknowns’. Further, an extensive discussion related to Figure 13 of RAMP links the ‘known unknowns’ and ‘unknown unknowns’ concepts. However, a definitive ‘probability span = 4’ position is rejected. RAMP takes the position that a cube factor concept is undesirable, and it is better to let decision takers consider these issues in other ways, making the span = 4 notion in this context an explicit point of contention. This can be compared to the established econometric position, which limits itself to classical statistical tools, leaving economists to make final subjective adjustments. However, decision takers have to use a cube factor implicitly if not explicitly, and the author prefers explicit advice from those who should understand the limitations of an analysis, leaving decision takers to make further adjustment judgements they deem appropriate. PRAM 2004 addresses the importance of assumptions, but it does not consider ‘optimistic bias’ adjustment factors or a ‘cube factor’ generalisation. Due to an agreement to limit the scope of changes plus other issues deemed more important there was no discussion of this issue. With hindsight these issues should have been considered.

In the author’s view PMBOK needs to get beyond its midway position on subjective probability, as indicated by the central role of PI indices, and RAMP and PRAM need to avoid their tolerance of PI indices. This is not because PI matrices cannot be useful. It is because there are better ways to achieve the same ends, and PI matrices constrain the vision of what risk management is about in important ways considered below. Further, all three guides need to explicitly address the cube factor issue in a coherent and holistic manner, which requires a generalised modern view of subjective probability. Until they do so, users of these guides and other similar guides need to understand the differences between them in terms of their position on probability. An exact numerical rating of the position of any of the guides considered in this paper is a matter of opinion which is not worth debate. A summary table of framing assumption point scores for the three guides considered has been suggested. It is not provided to avoid encouraging such debate. However, the nature and the extent of the gap between common and best practice and the role of guides in defining these positions is a very important area of debate.

3. The span of the ‘uncertainty’ and ‘risk’ concepts employed

(0) uncertainty is ignored,
(1) uncertainty about identified events and conditions is considered,
(2) uncertainty about accumulated variability is also considered,
(3) uncertainty about ambiguity is also considered,
(4) uncertainty about implicit and framing assumptions is also considered.

(0) risk is not an explicit operational concept,
(1) risk is defined in one dimensional terms, usually equivalent to an expected outcome,
(2) risk is defined in two dimensions with a cumulative view of quantified variability,
(3) risk also includes the implications of ambiguity which is not quantified,
(4) risk also includes the implications of framing and working assumptions.

It is useful to distinguish framing assumptions concerned with the meaning of ‘uncertainty’ and ‘risk’ as indicated above, but discuss them jointly.

In the author’s view, an ‘uncertainty span = 4’ position requires a definition along the lines ‘uncertainty is lack of certainty in the simple common language sense’, and a ‘risk span = 4’ position requires a definition along the lines ‘risk is the possibility of departures from expectations which matter’. Explaining why, and how this relates to the three guides and common practice, is the concern of this section.

The classical position on probability implies a distinction between ‘uncertainty’ and ‘risk’ couched in terms of the availability of probabilities (or not). Knight [20] is traditionally blamed for this distinction, but a recent paper [21] suggests he was innocent of this charge. A classical position on decision analysis is possibly the real culprit. A modern position on probability renders this distinction irrelevant because whether or not we associate probabilities with uncertainty is simply a question of whether or not this is a useful thing to do. Some economists who adopt a modern view on probability use ‘risk’ and ‘uncertainty’ interchangeably. The position taken above allows us to focus on uncertainty management first, to define expectations, then move on to risk, to consider departures from expectations if and when they matter. This facilitates a generalisation of a two parameter mean–variance view of risk discussed below while ‘keeping it simple’, in practice and in terms of underlying principles.

Deliberately ignoring aspects of uncertainty is endemic to common practice, possibly because most people are not comfortable with concepts or tools for addressing uncertainty, and they are not prepared to address anything they can avoid when they do not have the concepts or tools to cope. Ward [22] developed a useful outline of a range of interpretations of ‘ambiguity’ which clarifies the point 3 and 4 positions on uncertainty assumed above, and emphasises why the aspects of uncertainty which tend to get ignored if a PI index focus is adopted are often the most important.

The PMBOK 2004 definition of ‘risk’ (project risk is an uncertain event or condition that, if it occurs, has a positive or a negative effect on at least one project objective, such as time, cost, scope, or quality) implies an ‘uncertainty span = 1’ position, although uncertainty is not explicitly defined or discussed separately. This position seems to be predicated on a classical or midway position on probabilities and decision making, and a commitment to the use of PI indices together with an analysis of assumptions. It is consistent with ‘common practice’. However, "Quantitative Analysis" in PMBOK 2004 terms requires an ‘uncertainty span = 2’ position, contradicting the espoused definition of risk. Further, because common practice following PMBOK "Quantitative Analysis" is usually limited to a ‘top ten’ or ‘top fifty’ set of risks identified in the "Qualitative Analysis" without considering the cumulative effect of those outside the top set or knock-on effects more generally, it weakens its ‘uncertainty span = 1’ position. On the other hand, the first paragraph on page 240 has an ‘uncertainty span = 4’ tone, as an example of the more general perspectives involved.

In the author’s view, PMBOK 2004 is predominantly in an ‘uncertainty span = 1’ position, because of the espoused definition of risk linked to the central role of PI indices. However, aspects of a 2 and 3 are present, and skilled users may approach a 4, so a 1–4 classification might be used. Here, as in terms of some other framing assumptions, the starting position is simplistic, so later generalisation is confusing and ineffective as a guide for the uninitiated. Both PRAM 2004 and RAMP 2005 can be associated with a 2–4 position, depending on how they are interpreted, with a focus close to a 4. The details are not worth exploring here, other than noting that the lack of a fully developed cube factor equivalent precludes a full 4. As in relation to all framing assumptions, debate on exact ratings for current editions of these guides would be of limited value, but debate on where they should be would be useful, and understanding the literature in the meantime requires attention to what is assumed about the meaning of ‘uncertainty’.

Early use of PI matrices like Fig. 1 in a hazard context used a ‘risk = probability × impact’ definition, and this is still common practice in safety analysis. The PI index value r3 in all three diagonal boxes as shown, with comparable common index values for other diagonal sets of boxes, makes sense in this context. If appropriate probability and impact scales are used, the expected outcomes associated with these sets of boxes are approximately the same. PMBOK 2004 uses a different definition of risk as noted above – it is broader, accommodating but not requiring a risk = probability × impact definition. However, making sense of the PMBOK use of risk indices or scores implies a similar one dimensional interpretation, a ‘span of risk = 1’ position. As just noted, "Quantitative Analysis" in PMBOK terms requires a ‘span of uncertainty = 2’ position, but this contradicts the espoused definition of risk, a simplistic starting position which does not cope with needed complexity.

A ‘span of risk = 2’ position means a one dimensional measure of risk cannot be used. This position might be associated with a two parameter Markowitz ‘mean–variance’ approach [23], initially formulated as an approach to the management of risk in terms of portfolios of securities, which has become a cornerstone of modern economics, leading to a Nobel Prize for Economics for Markowitz. For any given expected outcome, the variance (a measure of spread) of the associated distribution is a surrogate measure of risk, and it is the way expected values and related variability combine, bearing in mind the impact of positive and negative dependence, which is crucial. In operational terms a useful related alternative is the use of overlaid cumulative probability distributions to make choices. Fig. 4 provides a simple example of associated decision rules. It assumes four potential choices all have uniform probability distributions of cost outcome in density form, linear cumulative forms. Choice ‘A’ involves the same expected outcome as ‘B’. However, the near vertical nature of the ‘B’ curve means it is relatively risk free, while the flat slope of the ‘A’ curve indicates high variability, and high risk relative to ‘B’. ‘B’ is the preferred choice overall if it is available, because it has less risk and an expected value no higher than any of the other choices. If ‘B’ is not available, then ‘A’ is the best choice. This is because ‘A’ dominates the other two, clearly indicated by the fact its curve is entirely to the left of the other two. Despite the fact that ‘A’ is more variable than either of the other two, it is less risky because of its lower expected value. If both ‘A’ and ‘B’ are not available, ‘C’ is probably the best choice. This is because its expected value is significantly lower. However, additional risk is indicated by the overlap area of the curves, the triangle a–b–c, so a trade-off between risk and expected value is involved. In this two dimensional framework a one dimensional approach based on expected values ignores the slopes of the lines – ‘A’ and ‘B’ cannot be distinguished, as illustrated by the use of r3 for all three boxes on the diagonal of Fig. 1. See [5,13 or 14] for a development of this approach which considers the complications of curved cumulative distribution shapes.

Fig. 4. Overlaid cumulative probability distribution portrayal of choices: simple linear examples. [Figure: cumulative probability (0 to 1.0, with 0.5 marked) plotted against cost for four choices A, B, C and D with linear cumulative curves, expected values indicated at the 0.5 level, and an overlap triangle a–b–c marked between two of the curves.]
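The decision rules associated with Fig. 4 can be mimicked in a short sketch; the four uniform cost distributions below are hypothetical stand-ins chosen to reproduce the qualitative relationships described, not the values in the figure.

```python
# Illustrative sketch only: hypothetical uniform cost distributions chosen to mimic
# the qualitative relationships described for Fig. 4 (A and B share an expected
# value, B is nearly risk free, A lies entirely to the left of C and D). The
# numbers are not taken from the figure.

import numpy as np

choices = {            # (low, high) cost bounds of uniform distributions
    "A": (50.0, 150.0),
    "B": (95.0, 105.0),
    "C": (130.0, 170.0),
    "D": (160.0, 200.0),
}

def expected(bounds):
    """Mean of a uniform distribution on the given (low, high) bounds."""
    return sum(bounds) / 2.0

def dominates(x, y, grid=np.linspace(0.0, 1.0, 101)):
    """First-order dominance for cost: x's cumulative curve lies entirely to the
    left of (or on) y's, i.e. every quantile of x is no greater than that of y."""
    qx = x[0] + grid * (x[1] - x[0])   # quantile function of uniform x
    qy = y[0] + grid * (y[1] - y[0])
    return bool(np.all(qx <= qy))

for name, bounds in choices.items():
    print(name, "expected cost:", expected(bounds))

print("A dominates C:", dominates(choices["A"], choices["C"]))  # True - curve entirely to the left
print("A dominates B:", dominates(choices["A"], choices["B"]))  # False - same expected cost, so risk decides
```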

As noted earlier, a ‘span of risk = 4’ position requires all uncertainty be considered, in terms of a definition of the form ‘risk is the possibility of departures from expectations which matter’. It is important to note here that this requires considering expectations and possible departures from them defined in terms of all sources of uncertainty, whether or not they are usefully measured.

A ‘span of risk = 3’ definition might be associated with the same definition of risk in the absence of a cube factor equivalent.

Both PRAM 2004 and RAMP 2005 use variants of this ‘span of risk = 3’ definition of ‘risk’. However, PRAM 2004 refers to risk defined in this way as ‘project risk’, also using a definition of ‘event risk’ which coincides with the PMBOK 2004 definition of ‘project risk’ noted above. Further, RAMP 2005 refers to risk defined in this way as ‘overall risk’, and sometimes uses ‘risk’ without the ‘overall’ in an event risk sense. This is an important example of the current confusion generated by different guides interpreting the same term in very different ways, or using different terms when one is feasible, and a legacy of confusion which impacts all guides. The PRAM 2004 twin definitions approach can be seen as part of the cost of accommodating those members of the working party who wanted to continue using PI matrices. Alternatively, it can be seen as part of the heritage of moving from a PRAM 1997 definition of risk which was effectively identical to the PMBOK 2004 position, bearing in mind this was a major change, and it was outside the agreed scope of revisions. The RAMP 2005 approach can be seen as simply accepting that ‘risk’ has a range of definitions, and so long as the context makes clear which is involved, this should not be a problem. In the author’s view clarity about what risk management is about has to include clarity about what is meant by the word ‘risk’. A basic definition like that provided above can be supplemented by clarifications like ‘upside risk’, ‘downside risk’, ‘event risk’, and so on. There is no need for pedantic definitions that cannot be altered. But a degree of coherence across guides is important. Until guides achieve this collectively, those interested in reading about the subject need to understand the differences in the positions described above and further differences in other guides and literature.

4. The span of the ‘optimisation’ concept employed

(0) optimisation in terms of seeking a ‘best’ outcome is not an explicit issue,
(1) project optimisation in terms of all relevant attributes is an explicit issue,
(2) project ‘risk efficiency’ is also an explicit issue for all relevant attributes,
(3) process ‘simplicity efficiency’ is an explicit issue,
(4) corporate ‘learning efficiency’ is an explicit issue.

The second sentence of PMBOK 2004 is "The objectives of Project Risk Management are to increase the probability and impact of positive events, and decrease the probability and impact of events adverse to the project." The way this position is developed when discussing "Qualitative Analysis" is clearly consistent with the definition of risk noted earlier, the use of PI indices, and a ‘span of optimisation = 0’ position. Aspects of positions 1–4 are an obvious part of the "Quantitative Analysis" discussion, but they are not systematically developed, arguably because the starting position is simplistic. This position is consistent with common practice. The author is well aware that contributing authors of PMBOK 2004 have the skills and experience to go well beyond this, but common practice, often attributed to PMBOK and consistent with a simple reading of PMBOK, can be characterised by a failure to make optimisation an explicit issue.

Fig. 5. A simplified version of Fig. 1 with the same PI index values in the same locations but diagonal boundary lines. [Figure: the same probability axis (low, medium, high; p0 = 0, p1, p2, p3 = 1) and impact axis (low, medium, high; i0 = 0, i1, i2, i3) as Fig. 1, with the index values r1 to r5 separated by diagonal boundary lines rather than a grid of boxes.]

All that a ‘span of optimisation = 1’ position requires is systematic attention to trade-offs between attributes, asking questions like ‘if we are prepared to increase the expected cost by 5%, by how much could we decrease the expected duration, and would it be worth it?’

A ‘span of optimisation = 2’ position extends this to ‘risk efficiency’, minimising the risk associated with the expected value with an appropriate trade-off between risk and expected outcome for each relevant attribute. ‘Risk efficiency’ in a variance efficiency form is central to a Markowitz mean–variance approach. Fig. 4 demonstrates a more general form which can cope with any cumulative distribution shape [5], by implication all higher moments [13]. PRAM 2004 and RAMP 2005 make ‘risk efficiency’ an explicit issue for key attributes like cost and time, seeking an optimal trade-off between expected outcomes and risk in the sense associated with Fig. 4, together with an appropriate trade-off between expected values and risk for each attribute, which leads on to appropriate trade-offs between attributes. What is crucial to the current argument is the need for a definition of risk which involves two dimensions and a cumulative view of quantified variability. What is also crucial in practice is at least a minimalist approach to quantitative analysis to address the question of risk efficiency, and the notion that doing so pays for itself directly, by improving expected performance and reducing risk simultaneously [5,13,14]. In practice a first pass minimalist approach usually suggests areas where more refined quantitative analysis would also pay, providing inbuilt sensitivity analysis which helps to achieve ‘simplicity efficiency’.

‘Simplicity efficiency’ [13,14,16] involves an explicit concern for using models and processes which minimise the complexity of the analysis required for any given level of insight, and facilitate choosing an appropriate level of complexity and insight for any given context. There is no ‘one best way’ to approach all situations. There is a wide range of choices. But an efficient choice is important, a choice which minimises the required complexity for any given level of insight. So is choosing an appropriate level of insight and complexity. The "Focus Risk Management Process" sub-phase of the PRAM 2004 process is designed to provide simplicity efficiency and the discussion of the process as a whole reflects simplicity efficiency. This concern is less apparent in RAMP 2005, but it does underlie the process. There is evidence of a concern for simplicity efficiency in PMBOK 2004, but the use of a PI index is a clear indication of process inefficiency. This inefficiency can be considered at several levels. At the lowest level, if a one dimensional index equating the diagonal boxes in Fig. 1 is the desired output, it would make more sense to draw the diagonal lines indicated in Fig. 5 and estimate the same index directly. This would provide the same risk index values without wasted effort or the unhelpful illusion ‘position on the diagonals matters’. For those used to non-linear scales with a ‘score’ defined by the product, the linear diagonals become non-linear, but the issue is the same. At a higher and more important level, the time spent in Fig. 1 derivatives and associated guestimation could be better spent on a very different style of qualitative analysis. For example, an effective simple ‘traffic light’ process is as follows.

Use simple versions of the ‘Identify’, ‘Structure’ and ‘Ownership’ phases of the PRAM 2004 process as elaborated in [14] to associate each source of uncertainty with a preliminary view of appropriate responses, each source of uncertainty and possible responses combination being referred to as an ‘issue’. Simplify the issues by grouping them, without losing sight of key dependencies, to the extent this is possible. Clarify the ownership of ‘issues’, at board, senior manager, contractor or sub-contractor levels as appropriate. Then, if an issue has clear and acknowledged ownership plus plausible responses, give it a ‘green light’. If an issue has clear and acknowledged ownership but no plausible responses as yet, give it an ‘amber light’. Otherwise give it a ‘red light’. Focus senior management attention on the red lights first. When amber lights are addressed, refine the priority by asking questions like ‘do we need to identify the issues needing immediate senior management attention first?’ or ‘do we need to identify the issues needing more analysis first?’ In the latter case simple quantitative analysis to ‘size’ the issues is useful. The former case suggests earlier project management failure with both immediate management implications and long term corporate learning implications.
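A minimal rendering of this traffic light logic is sketched below; the issue attributes and examples are hypothetical, and the PRAM phases that would populate them in practice are not modelled.

```python
# Illustrative sketch only: a minimal rendering of the 'traffic light' prioritisation
# described above. The Issue fields and example issues are hypothetical; the PRAM
# 'Identify', 'Structure' and 'Ownership' phases that would populate them are not modelled.

from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    ownership_clear_and_acknowledged: bool
    has_plausible_responses: bool

def traffic_light(issue: Issue) -> str:
    if issue.ownership_clear_and_acknowledged and issue.has_plausible_responses:
        return "green"
    if issue.ownership_clear_and_acknowledged:
        return "amber"   # owned, but no plausible responses as yet
    return "red"         # unclear or unacknowledged ownership - senior management first

issues = [
    Issue("pipe buckle during lay season", True, True),
    Issue("late design freeze", True, False),
    Issue("regulatory approval responsibility unresolved", False, False),
]

# Sort so red lights come first, then amber, then green.
for issue in sorted(issues, key=lambda i: ["red", "amber", "green"].index(traffic_light(i))):
    print(f"{traffic_light(issue):6s} {issue.name}")
```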

It should be clear that if this kind of analysis has not been done, guestimation in the format of Figs. 1 or 5 is likely to be relatively ineffectual. Intermediate levels of inefficiency include the waste of information and the problems associated with a one dimensional measure of risk discussed earlier, plus related defects discussed on page 145 of PRAM 2004.

Corporate learning is a key issue [24], and ‘learning efficiency’ implies all the lower levels of optimisation need to be addressed over time in terms of explicitly managing corporate learning. For example, if early applications of a project risk management process are treated as investments in corporate learning as well as assessments of particular projects, erring on the side of more complex processes provides insight into what forms of complexity pay and which do not. Later applications can reflect this tested approach to simplification, which is not available if simplistic approaches are used from the outset. This notion is explicit in PRAM 2004, using this example. In some respects it is less evident in RAMP 2005, but in others it is more evident, the role of a whole life cycle perspective being particularly important. It is generally less evident in PMBOK 2004, although skilled consultants who use this basis clearly understand many of the issues.

While ‘optimisation span = 3 and 4’ positions are explored in some detail in [13,14], their treatment is still very limited relative to what might be done. This is an area which deserves a lot more attention in the literature as a whole. Moving all guides to a clear 4 is a key opportunity. In the meantime users of all guides need to understand their limitations.

5. The span of the ‘opportunity’ concept employed

(0) opportunities are ignored,
(1) opportunities are favourable events,
(2) opportunities also include cumulative good luck,
(3) opportunities include more effective responses to both positive and negative variability,
(4) opportunities include any way of improving performance.

The PMBOK 2004 definition of risk implies an ‘opportunity span = 1’ position. The use of PI matrices like Fig. 1 can be expanded to consider opportunities in this sense as suggested by Hillson [25], incorporated in both PRAM 2004 and PMBOK 2004. As noted earlier, “Quantitative Analysis” in PMBOK terms requires attention to the cumulative effect of uncertainty, but the more sophisticated concerns which “Quantitative Analysis” can address are undermined by a simplistic focus in the “Qualitative Analysis”.
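As a purely illustrative sketch (invented scales and entries, not taken from [25] or from either guide), scoring opportunities as discrete favourable events alongside threats keeps the analysis within an ‘opportunity span = 1’ view.

# Hypothetical sketch of an event-based extension of PI scoring to opportunities:
# favourable events scored on the same 1-5 probability and impact scales as threats.
threats = {"design rework required": (3, 4), "key supplier insolvency": (1, 5)}
opportunities = {"early site access": (2, 3), "favourable exchange rates": (3, 2)}

def ranked_by_pi(entries):
    # Rank entries by the product of probability and impact scores, highest first.
    return sorted(entries.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

print("threats:      ", ranked_by_pi(threats))
print("opportunities:", ranked_by_pi(opportunities))
# Opportunities handled this way remain discrete favourable events, so cumulative
# good luck and the choice of responses (the higher spans listed above) stay
# outside the analysis, and the index still discards the probability/impact
# distinction noted earlier.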

A cumulative view of good luck, three activities in sequence accumulating accelerated performance for a variety of reasons for example, is beyond the span of uncertainty and risk concepts limited to events and conditions. Risk and uncertainty concepts have to embrace cumulative variability, clearly the PRAM 2004 and RAMP 2005 position.

The notion that responses to uncertainty are central to the ‘opportunity’ concept is an important further generalisation. For example, to accumulate good luck in three activities in sequence performed by different sub-contractors, contracts would have to ensure the following activity sub-contractors are motivated to respond to early completion by prior activity sub-contractors, to avoid ‘wasting good luck’ [13].
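A minimal simulation sketch (invented planned dates and an assumed triangular duration distribution, purely for illustration) shows why this matters: if successors can only start at their planned dates, upside variability is wasted and only delays propagate.

import random

PLANNED_STARTS = [0.0, 10.0, 20.0]   # three activities in sequence, each planned to take 10

def sample_duration():
    # Actual durations assumed to vary between 7 and 13 around a most likely 10.
    return random.triangular(7.0, 13.0, 10.0)

def completion(exploit_early_finishes: bool) -> float:
    finish = 0.0
    for planned_start in PLANNED_STARTS:
        start = finish if exploit_early_finishes else max(finish, planned_start)
        finish = start + sample_duration()
    return finish

random.seed(1)
n = 20_000
flexible = sum(completion(True) for _ in range(n)) / n
rigid = sum(completion(False) for _ in range(n)) / n
print(f"mean completion when successors can start early:  {flexible:.1f}")
print(f"mean completion when successors start as planned: {rigid:.1f}")
# In the rigid case only bad luck accumulates: an early finish by one sub-contractor
# cannot be exploited by the next, so the expected completion date is later even
# though every activity has the same duration distribution in both cases.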

Further generalisation can include indirect morale-boosting notions, like ‘searching for opportunities’ is what risk management is about, and ‘if risk management is done well, it should be fun’ [14].

PRAM 2004 and RAMP 2005 are both 4s in these terms, although the emphasis of the point 3 and 4 issues needs further development in these guides and the literature as a whole. PMBOK 2004 touches on point 3 and 4 issues, but the implicit revision in the initial definition of ‘risk’ needed to deal with “Quantitative Analysis” is confusing, and it helps to sustain common practice which does not fully exploit the opportunity management aspect of uncertainty management.

6. Conclusion

Most organisations which make extensive use of risk and uncertainty management processes need to develop their own processes at some stage, usefully viewed as a synthesis of successive “Focus Risk Management Process” sub-phases if they start with a PRAM 2004 framework as part of their perspective. Developers of these processes and people involved in using any particular process should view them in the context of what is widely understood – our collective experience or knowledge. This applies to users who provide input assessments and users who interpret the outputs. Users of processes do not need extensive process design expertise, but they do need to understand why using any process at any level in a ‘paint by numbers’ manner is ineffective. Features and insights provided by a range of guides should be considered as part of this knowledge. Many valuable guides not considered in this paper, like [26], offer strengths which can be usefully included, provided any conflicting basic definitions and assumptions are understood and worked around. One immediate purpose of this paper is clarifying how all readers of this journal can best approach the literature on project risk management, guides produced by professional bodies and government agencies in particular.

Other literature which uses some different definitions and terms but underlies the position taken here, like [27], is clearly relevant. So is literature which is supportive of any relevant perspectives, like [28], and literature which offers alternative perspectives, like [29]. Books like [25], which offer useful additional perspectives in some areas but employ some framing assumptions which contradict one or more of the positions taken here, are more difficult to synthesise unless the implications of these framing assumptions are understood, but they offer important insights. A basic purpose of this paper is making it easier for all readers of this journal to cope with a literature which involves different framing assumptions.

It should be clear there are serious limitations in many of the framing assumptions of PMBOK 2004 from the author’s perspective. In particular, the span of the probability concept used limits the span of the uncertainty and risk concepts, which in turn limits the span of the optimisation and opportunity concepts. The use of PI indices is central to the PMBOK process and these span issues. The follow-on paper [6] will develop the implications in terms of generic process definition. The author is very concerned about making sure the potential users of all guides understand why common practice based on a simple reading of PMBOK is not best practice. The author is also very concerned to make it clear that this does not imply that contributing authors of the PMBOK guide follow it in a simplistic way in their own practice, or that other experts cannot interpret it in more sophisticated terms. The basic difficulty with PMBOK as the author sees it is a simplistic starting position which does not facilitate clearly understood additional complexity when that complexity is useful. It is not ‘simplicity efficient’ in conceptual terms.

It is perhaps less obvious but just as important to note that the author is concerned that PMBOK 2004 may remain the de facto definition of common practice, because it is easy to follow for those who are untroubled by its limitations, unless more sophisticated guides are made as easy to understand as possible by all those concerned with producing guides for their users. For example, trying to accommodate the use of PI indices in guides like PRAM 2004 or RAMP 2005, which provide much better tools, is unhelpful in the author’s view. It involves another kind of ‘simplicity inefficiency’.

But the author’s more general concern is the evolution of all guides towards consistent support at a generic level for the users of these guides. Other guides can be assessed in similar terms using the approach to framing assumptions outlined here. In the long term this should help all guides to approach easily understood ‘best practice’. At present users can be badly served by guides and a broader literature which is confusing because it uses very basic words like ‘risk’ in very different ways, and it assumes very different objectives are at stake, with very little discussion in the literature about the implications of these differences. Many good guides are available, each with particular strengths. To some extent they make different framing assumptions because they are designed for different contexts. Making different framing assumptions is not a problem in itself – it is the lack of general awareness of the implications which is the problem. The author hopes this paper generates some progress towards the resolution of these concerns.

Acknowledgements

The author is grateful to all members of the PRAM and RAMP working parties who contributed to the insights about the points of contention which initiated this paper. Stephen Ward provided comments on an early draft which led to the separation of [6] and significant development. Contributing authors for the PRAM, RAMP and PMBOK guides provided very useful comments on earlier drafts: Terry Williams, Mike Nichols, Chris Lewin, Stephen Grey, Steve Simister, David Hillson, Martin Hopkinson, Crispin (Kik) Piney, David Tilston, David Hulett and Paul Close, in the order comments were first received, in some cases an ongoing dialogue proving particularly helpful. Two anonymous referees also provided very helpful suggestions.

References

[1] PMI (Project Management Institute). A guide to the project management body of knowledge (PMBOK guide). 3rd ed. Upper Darby, PA: PMI; 2004.
[2] APM (Association for Project Management). Project risk analysis and management (PRAM) guide. 2nd ed. High Wycombe: APM Publishing; 2004.
[3] Institution of Civil Engineers and the Faculty and Institute of Actuaries. RAMP: risk analysis and management for projects. 2nd ed. London: Thomas Telford; 2005.
[4] Simon P, Hillson D, Newland K, editors. APM (Association for Project Management) project risk analysis and management (PRAM) guide. Norwich: APM Group Limited; 1997.
[5] Chapman CB, Ward SC. Why risk efficiency is a key aspect of best practice projects. Int J Project Manage 2004;22:619–32.
[6] Hillson DA. Towards a risk maturity model. Int J Project Business Risk Manage 1997;Spring 1(1):35–45.
[7] Hopkinson M. Maturity models in practice. Risk Manage Bull 2002;5(4).
[8] Chapman CB. Key points of contention in generic process descriptions for project risk and uncertainty management. Int J Project Manage (submitted for publication).
[9] Chapman CB, Cooper DF, Page MJ. Management for engineers. Chichester: Wiley; 1987.
[10] Pratt JW, Raiffa H, Schlaifer R. The foundations of decisions under uncertainty: an elementary exposition. J Am Stat Assoc 1964;59:353–75.
[11] Raiffa H. Decision analysis: introductory lectures on choices under uncertainty. Reading, MA: Addison-Wesley; 1968.
[12] Chapman CB, Ward SC. Estimation and evaluation of uncertainty: a minimalist first pass approach. Int J Project Manage 2000;18:369–83.
[13] Chapman CB, Ward SC. Managing project risk and uncertainty: a constructively simple approach to decision making. Chichester: Wiley; 2002.
[14] Chapman CB, Ward SC. Project risk management: processes, techniques and insights. 2nd ed. Chichester: Wiley; 2003.
[15] Moder JJ, Phillips CR. Project management with CPM and PERT. New York: Van Nostrand; 1970.
[16] Chapman CB, Ward SC, Harwood I. Minimising the effects of dysfunctional corporate culture in estimation and evaluation processes: a constructively simple approach. Int J Project Manage 2006;24:106–15.
[17] Dreck MH. <http://www.janegalt.net/blog/archives/004640.html> [accessed 29.06.05].
[18] Treasury HM. The green book: appraisal and evaluation in central government. 1 Horse Guards Road, London SW1A 2HQ: HM Treasury; 2003.
[19] Flyvbjerg B, in association with COWI. Procedures for dealing with optimism bias in transport planning, guidance document. Report 58924, Issue 1. London: The British Department for Transport; 2004.
[20] Knight FH. Risk, uncertainty and profit. Boston: Houghton Mifflin; 1921.
[21] LeRoy SF, Singell Jr LD. Knight on risk and uncertainty. J Political Economy 1987;95:394–406.
[22] Ward SC, Chapman CB. Transforming project risk management into project uncertainty management. Int J Project Manage 2003;21:95–105.
[23] Markowitz H. Portfolio selection: efficient diversification of investments. New York: Wiley; 1959.
[24] Senge PM. The fifth discipline: the art and practice of the learning organisation. New York: Doubleday; 1990.
[25] Hillson D. Effective opportunity management for projects: exploiting positive risk. New York: Marcel Dekker; 2003.
[26] Australian/New Zealand Standard AS/NZS 4360:2004. Risk management. Homebush, NSW 2140: Standards Australia and Wellington 6001: Standards New Zealand; 2004.
[27] Cooper DF, Chapman CB. Risk analysis for large projects. Chichester: Wiley; 1987.
[28] Morris PWG, Hough GH. The anatomy of major projects. Chichester: Wiley; 1987.
[29] Williams T. Modelling complex projects. Chichester: Wiley; 2002.