


www.elsevier.com/locate/ijproman

International Journal of Project Management 24 (2006) 106–115


Minimising the effects of dysfunctional corporate culture in estimation and evaluation processes: A constructively simple approach

Chris Chapman *, Stephen Ward, Ian Harwood

School of Management, University of Southampton, Southampton SO17 1BJ, UK

Received 10 June 2005; accepted 5 August 2005

Abstract

This paper explores connections between subjective judgements about uncertainty and corporate culture which are relevant to everyone interested in estimating project parameters or interpreting estimates prepared by others. The basis of the discussion is a simple example, drawn from an actual case. It involves estimating the uncertain duration of a project activity in an organisation with two common cultural conditions: a 'conspiracy of optimism', and 'irrational objectivity'. After considering some conventional approaches, the paper goes on to suggest a 'constructively simple' approach to estimation which is responsive to the emerging analysis and which also incorporates end-user adjustments to counter culturally driven uncertainties and bias.
© 2005 Elsevier Ltd and IPMA. All rights reserved.

Keywords: Culture; Estimates; Models; Probabilities; Uncertainty; Project risk management; Optimistic bias adjustments; Subjectivity and objectivity

1. Introduction

The estimation and evaluation of uncertain parameters is a core aspect of most project management processes. It is therefore important to understand the factors that influence parameter estimation and in particular, the extent to which bias may be involved. These factors are framed by the ideologies, beliefs and deep-set values within an organisation [1] and they will impact on how the shared attitudes to risk and uncertainty affect the estimation process.

Fig. 1 places estimation and evaluation activities within a general decision support process [2]. For present purposes this is a more useful framework than any particular conventional project risk management process, but the ideas developed in this paper can be embedded in any specific risk management process (PRAM [3], PMBOK [4], RAMP [5], MoR [6] or AS/NZS 4360 [7], for example), and used to interpret simple deterministic estimation processes.

0263-7863/$30.00 © 2005 Elsevier Ltd and IPMA. All rights reserved.
doi:10.1016/j.ijproman.2005.08.004
* Corresponding author. Tel.: +44 23 8059 2525; fax: +44 23 8059 3844.
E-mail address: [email protected] (C. Chapman).

As Fig. 1 indicates, a decision support process and embedded estimation and evaluation processes can be highly iterative. This is because uncertainty about how to proceed has to be progressively resolved by using simple working assumptions for early passes which are refined as necessary in later passes.

The decision making process illustrated in Fig. 1 recognises that estimating expected values and associated variability for parameters cannot be decoupled from: (1) understanding the context, (2) choosing a specific process for the analysis, (3) specifying the model structure, and (4) evaluating and interpreting the consequences of associated uncertainty.

[Fig. 1. Estimation and evaluation in a general decision support process. The figure shows an iterative loop: from the current decision stage, capture the context and focus the process; model the core issues; estimate the key parameters; evaluate the implications; test results rigorously and interpret them creatively; implement the results; and return to the current decision stage.]

An holistic view of uncertainty is a key aspect of the present transformation of project risk management into project uncertainty management [8]. Uncertainty management must embrace ambiguity as well as variability. Ambiguity is associated with lack of clarity because of a lack of data, lack of detail, lack of structure to consider the issues, lack of clarity about the nature and variety of assumptions employed, sources of bias, and ignorance about how much effort it is worth expending to clarify the situation. Failure to recognise this can lead to estimation processes which are irrational as well as ineffective and inefficient.

This weakness is sometimes reinforced by a 'hard science' view of the desirability of rigorous theory and 'objective' data. In principle, we would like all estimates to be entirely objective, based on unambiguous interpretation of appropriate data employed in theoretically correct models. However, in practice there is no such thing as a completely objective estimate of any probability distribution model which is suitable for rational decision making. Assumptions are almost always involved in estimation and evaluation processes, even when lots of relevant data are available. Any assumptions which are not strictly true make associated parameter estimates and overall evaluations subjective. Particularly important are the fundamental 'framing' assumptions which influence the choice of model. There is virtually never 'one best theoretical model', as model choices are themselves subjective.

If we wish to make decisions which are consistent with our beliefs, we must use subjective estimates of parameters for subjectively selected models. This means our decisions will be non-optimal to the extent that our beliefs are misguided about model parameters, and to the extent that underlying assumptions about models are inappropriate. However, assuming our beliefs have some rational basis, if we make decisions which are consistent with our beliefs, the chances of optimal decisions will be much higher. This is 'rational subjectivity', now widely understood and subscribed to in terms of subjective probabilities and associated decision tree models, and the basis of most modern decision analysis textbooks since the period of Raiffa's classic 'Decision Analysis: Introductory Lectures on Choices Under Uncertainty' [9].

Given that objectivity is not feasible, it should not be an issue. What is always an issue is the rationality of estimates, in Simon's sense of 'procedural rationality' [10]. Subjective estimates that are rational are what is needed, and 'irrational objectivity' has to be avoided. Unfortunately, irrational objectivity and a reluctance to recognise the significance of ambiguities are often encouraged by inappropriate organisational cultures.

Failure to recognise the significance of ambiguity is also reinforced by a reluctance to take subjective probabilities to their logical conclusion and adopt a pragmatic framework which emphasises the importance of being 'approximately right' in terms of a broad view of the right question. Being 'precisely wrong' in the sense of having a precisely correct answer to the wrong question is a standing joke, but there are clear cultural pressures within organisations driving many people in this direction. These cultural pressures, including a 'hard science' view of objective data and models, need to be recognised and managed if the effectiveness and efficiency of estimation and evaluation processes are to be improved. The 'constructively simple' approach is designed to neutralise these pressures in the context of a direct focus on process effectiveness and efficiency.

The origins of a 'constructively simple' approach to decision making lie in mainstream operational research [11]. Its recent development in terms of estimation [12,13] and more generally [14] has a project risk management focus, but it is a general approach, relevant to all management practice. This paper will now use a case-based example to illustrate the constructively simple approach to estimating project activity duration. However, the ideas apply to all other project management performance measures of interest, including cost and 'quality' in various guises.

2. An example context: estimating the duration of an activity

Suppose the following. A project manager needs to estimate how long it will take to obtain corporate office approval for a potential design change with expenditure implications. The project manager is aware that there is a 'corporate standard time' for such approvals of 3 weeks. The corporate culture places a high value on individualistic professional competence and objectivity in estimating and analysis, and there is a strong 'can do' ethos in line with Deal and Kennedy's 'tough-guy, macho' cultural type [15]. These cultural characteristics are strong enough and unbalanced enough to result in 'irrational objectivity' ('objectivity' is more important than 'rationality') and a 'conspiracy of optimism' (it is 'bad form' to reveal bad news, to seem to lack confidence, or to otherwise appear pessimistic).

The project manager has recorded how long similar approvals have taken on previous projects, as shown in Table 1.

The Table 1 outcomes suggest that 3 weeks is not a sensible estimate of what is likely to happen, on average, and the standard time of 3 weeks ought to be interpreted as a 'target', something sensible to aim for given no problems. If data about approval durations were not available, the project manager might just use a 3 week duration estimate. However, this would not be rational if such data were available to the organisation as a whole. It is never rational to use 'objective' numbers which are known to be inconsistent with reasonable expectations.

Given this scenario, how should the project managerproceed?

3. Some conventional approaches to estimation

Table 1
Example data for design change approval

Project   Duration of approval (weeks)
A         3
B         7
C         6
D         4
E         15

A basic 'objective probability' approach would assume (3 + 7 + 6 + 4 + 15)/5 = 7 weeks is the best estimate of the expected duration [16], the simple average (mean) of all the observations in Table 1.

Some people with a classical statistical training might associate this mean with a Normal (Gaussian) distribution and estimate an associated spread (confidence bound) via the variance [16]. The 15 week observation clearly makes this dubious, which may lead some to reject the 15 weeks as an outlier, especially if it involved very unusual circumstances which the project manager is confident will not occur this time. Presuming that this is legitimate is dubious, but it leads to a mean of (3 + 7 + 6 + 4)/4 = 5 weeks associated with a probability distribution which looks Normal, and mean and variance estimates look objectively determined in the sense that the observation of 15 weeks can be rejected as an outlier relative to the 'normal' variability reflected by the other four observations using classical statistical tests.
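As a concrete illustration, the two conventional calculations above can be sketched in a few lines of Python. This is an illustration only: the outlier 'test' here is simply dropping the 15 week value, standing in for the classical statistical tests mentioned above.

```python
# Two conventional readings of the Table 1 data (durations in weeks).
from statistics import mean, stdev

durations = [3, 7, 6, 4, 15]  # projects A-E

# Basic 'objective probability' estimate: the simple average of all data.
mean_all = mean(durations)    # (3 + 7 + 6 + 4 + 15)/5 = 7 weeks

# Normal-distribution view after rejecting 15 weeks as an outlier.
trimmed = [d for d in durations if d != 15]
mean_trimmed = mean(trimmed)  # (3 + 7 + 6 + 4)/4 = 5 weeks
spread = stdev(trimmed)       # sample standard deviation, roughly 1.8 weeks

print(mean_all, mean_trimmed, round(spread, 2))
```

As the paper notes, the second calculation only looks objectively determined; the decision to reject the 15 week observation is itself a subjective modelling choice.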

Some seasoned users of PERT [17] might prefer to start by assuming a standard Beta distribution approximation. Then the mean value is approximated by (b + 4m + h)/6, where b is a plausible optimistic value, m the most likely value, and h a plausible pessimistic value. Table 1 raises the obvious problem 'what is the most likely value?'. If the value 6 is assumed, b is estimated by 3, and h by 15, the estimate of mean duration is (3 + 4 × 6 + 15)/6 = 7 weeks. The corresponding standard deviation estimate is (h − b)/6 = (15 − 3)/6 = 2. This Beta distribution approximation accommodates the obvious skew in the data and it makes use of all the data. It is somewhat rough and ready, and it does not encourage examination of the pessimistic tail, but it is clearly more robust than a Normal distribution approach, with or without outlier rejection.
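The PERT approximation above is easy to sketch (illustrative code; b, m and h as defined in the text):

```python
# PERT-style Beta distribution approximation for the Table 1 data:
# b = 3 (plausible optimistic), m = 6 (assumed most likely),
# h = 15 (plausible pessimistic).
def pert_mean(b, m, h):
    # Standard PERT approximation to the mean: (b + 4m + h)/6.
    return (b + 4 * m + h) / 6

def pert_stdev(b, h):
    # Standard PERT approximation to the standard deviation: (h - b)/6.
    return (h - b) / 6

print(pert_mean(3, 6, 15))  # 7.0 weeks
print(pert_stdev(3, 15))    # 2.0 weeks
```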

Others might treat the (3 + 7 + 6 + 4)/4 = 5 week estimate of the mean and an associated variance as Normally-distributed uncertainty related to 'normal variability', which occurs approximately four times out of five, while associating the 15 week observation with 'abnormal variability', which occurs approximately one time out of five. This perspective would recognise that normal variability may be important, but that abnormal variability may be much more important, and it may need to be understood, incorporated in estimates, and managed if it is cost effective to do so. This immediately raises the need to model abnormal variability (in subjective terms), if it matters, in a manner which facilitates its management.

Some people may take the view that the project manager's best course of action is to assume that approval for the design change will take 3 weeks, since this is the corporate standard time for approvals. The value provided by this estimate only makes sense if it is interpreted as conditional on an assumption that the risk of exceeding 3 weeks will belong to the corporate office, not the project manager. Given that such an assumption is unrealistic in most organisations, this approach to estimation might be regarded as a 'conditional estimate cop-out'. Such estimating is widespread practice in a wide variety of contexts: assumed conditions associated with an estimate should be clearly documented and considered when evaluating the estimate, but they are often hidden or obscured and usually subsequently forgotten. Conditional estimate cop-outs are a particularly dangerous practice when the assumed conditions for the estimate are ambiguous, and allocation of responsibility for the conditions is unclear. Such practice is likely to flourish in organisations whose culture includes aspects of a 'conspiracy of optimism'. In such organisations, the 'conditional estimate cop-out' is a useful defensive mechanism, but one which can collude with and encourage a conspiracy of optimism of serious proportions. Estimates based on convenient but unrealistic conditions may appear rational and objective, but they are actually irrational, because they do not reflect estimators' rationally held beliefs about the true state of affairs.

Such considerations mean that a 'rational subjective' approach to estimating is essential if the quality of estimation processes is to be improved. One priority issue is stamping out conditional estimate cop-outs and picking up the related effects. Another priority issue is to determine whether related uncertainty matters. If it matters, it needs to receive further attention proportionate to how much it matters and the extent to which it can be managed given the available risk management resources. This implies a rationally subjective 'constructively simple' approach to estimating which is iterative; starting out with a perspective that is transparent and simple, but which goes into more detail in later passes to the extent that this is useful.

A 'rational' approach requires a multiple-pass iterative approach, because we do not know if uncertainty matters when we start, or if it matters, where it matters. The first pass of such a process needs to be as simple as possible, and demonstrably crude to remind everyone that simplicity has been maximised. Further, the first pass needs to be conservative (pessimistic) because we know people are usually too optimistic when estimating variability [18]. The residual sources of variability which are not dismissed on a first pass may or may not matter, and more effort may be needed to clarify what is involved in a second pass analysis or beyond.

4. A constructively simple approach to the evaluation and interpretation of estimates

Consider the example first pass estimate outlined in Table 2.

Table 2
Estimating the duration of design change approval – first pass

Optimistic estimate: 3 weeks (lowest observed value, a plausible minimum)
Pessimistic estimate: 15 weeks (highest observed value, a plausible maximum)
Expected value: 9 weeks (central value (3 + 15)/2)

Working assumptions: The data comes from a uniform probability distribution, 3 and 15 corresponding very approximately to 10 and 90 percentile values.

The key working assumption here is a uniform distribution, which is deliberately pessimistic with respect to the expected value estimate, and deliberately pessimistic and crude with respect to potential variability. Fig. 2 shows the simplicity of the model which results from these working assumptions, relative to a plausible view of the underlying reality.

[Fig. 2. First pass model working assumptions and presumed reality. Probability is plotted against weeks (0–35), contrasting the uniform working assumptions for the model and its expected value with the presumed reality (although multiple modes may be involved).]

If both the 9 weeks expected value and the plus or minus 6 weeks plausible variation are not a problem in the context of planning the project as a whole, then no further estimating effort is necessary and the first pass estimate is 'fit for purpose'. However, if either the expected value or the variation is a potential problem, further analysis to refine the estimates will be required. Assume that the 9 weeks expected duration is a potential problem and that a 15 week outcome would be a significant problem for the project manager.

A second pass at estimating the time taken to obtain approval for a design change might start by questioning a possible trend associated with the 15 weeks observation. More generally it might explore the underlying causality of 'normal' and 'abnormal' variability. In broad terms this might involve looking at the reasons for variability within what 'normally happens', developing an understanding of reasons for possible outliers from what 'normally happens', and developing an understanding of what defines 'abnormal events'. It might be recognised that the reason for the previously observed 15 weeks outcome was a critical review of the project as a whole at the time approval was sought for the design change. However, similar lengthy delays might be associated with a number of other identified reasons for 'abnormal' variation, such as: bad timing in relation to extended leave taken by key approvals staff, perhaps due to illness; serious defects in the project's management or approval request; and general funding reviews. It might be observed that the 7, 6 and 4 week observations are all 'normal' variations, associated with, for example, pressure on staff from other projects, or routine shortcomings in the approval requests involving a need for further information. The 3 weeks standard, achieved once, might have involved no problems of any kind, a situation which occurred once in five observations.
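For concreteness, the first pass arithmetic behind Table 2 amounts to the following deliberately crude sketch, mirroring the working assumptions above:

```python
# First pass: a uniform distribution between the observed extremes (Table 2).
durations = [3, 7, 6, 4, 15]  # weeks, observed approval durations (Table 1)

optimistic = min(durations)                     # 3 weeks, plausible minimum
pessimistic = max(durations)                    # 15 weeks, plausible maximum
expected = (optimistic + pessimistic) / 2       # 9 weeks, central value
plus_or_minus = (pessimistic - optimistic) / 2  # crude +/- 6 weeks spread

print(expected, plus_or_minus)  # 9.0 6.0
```

The point of such transparency is that everyone can see simplicity has been maximised, and that the estimate is conservative by design.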

These second pass deliberations might lead to the specification of a model of design approval duration of the form outlined in Table 3.

Table 3
Estimating the duration of design change approval – second pass

Normal situation
  Optimistic estimate: 3 weeks (lowest observed value, plausible minimum)
  Pessimistic estimate: 7 weeks (highest observed value, plausible maximum)
  Expected value: 5 weeks (central value (3 + 7)/2)

Abnormal situation
  Optimistic estimate: 10 weeks (plausible minimum given observed 15)
  Pessimistic estimate: 20 weeks (plausible maximum given observed 15)
  Expected value: 15 weeks ((10 + 20)/2, equal to observed 15 by design)

Probability that an abnormal situation is involved
  Optimistic estimate: 0.1 (plausible minimum given observed 0.2)
  Pessimistic estimate: 0.5 (plausible maximum given observed 0.2)
  Expected value: 0.3 ((0.1 + 0.5)/2, greater than 0.2 by design)

Combined view
  Optimistic estimate: 3 weeks (normal minimum)
  Pessimistic estimate: 20 weeks (abnormal maximum)
  Expected value: 8 weeks (5 × (1 − 0.3) + 15 × 0.3)

Working assumptions: The 'normal' data comes from a uniform probability distribution, 3 and 7 corresponding very approximately to 10 and 90 percentile values. The 'abnormal' data comes from uniform probability distributions. Probabilities of 0.1 and 0.5, and durations of 10 and 20 weeks, both correspond very approximately to 10 and 90 percentile values, defined subjectively (based on unquantified experience), in this case in relation to an observed 1 in 5 chance (probability 0.2) of an observed 15 week outcome, a sample of one.

This particular example involves subjective estimates related to both the duration of an 'abnormal situation' and the probability that an abnormal situation is involved, in the latter case using the range 0.1–0.5 with an expected value of 0.3. The one observation of an 'abnormal situation' in Table 1 suggests a probability of 0.2 (a one in five chance), but a rational response to only one observation requires a degree of conservatism if the outcome may be a decision to accept this potential variability and take the analysis no further. Given the limited data about a 'normal situation', which may not be representative, even the 'normal situation' estimates of 3 weeks to 7 weeks with an expected value of 5 weeks are best viewed as plausible subjective estimates, in a manner consistent with the first pass treatment. Even if no data were available, the Table 3 approach would still be a sound rational subjective approach if the numbers seemed sensible in the context of a project team brainstorm of relevant experience and changes in circumstances.

It is worth noting that project staff may tend to focus on reasons for delay attributable to approvals staff, while approvals staff will understandably take a different, perhaps opposite view. Everyone is naturally inclined to look for reasons for variability which do not reflect badly on themselves. Assumptions about how well (or badly) this particular project will manage its approvals request are an issue which should significantly affect the estimates, whether or not data is available, and who is preparing the estimates will inevitably colour their nature [19,20].

The example second pass estimation produces an 8 week expected value which is less than the 9 week expected value from the first pass. The plus or minus 6 weeks crude 10–90 percentile value associated with the first pass remains plausible, but the distribution shape is considerably refined by the second pass estimate, moving towards the plausible underlying reality as illustrated in Fig. 3.

[Fig. 3. Second pass model working assumptions and presumed reality. Probability is plotted against weeks (0–35), contrasting the working assumptions for the model (plotted assuming 0.3 and 0.7 probabilities exactly for the 'normal' and 'abnormal' scenarios) and the expected value with the presumed reality (although multiple modes may be involved).]

Fig. 3 still has a transparent and crude simplicity, but it provides deeper insight in practical terms than a more sophisticated smooth curve approximating the presumed reality, because the two scenarios can be associated with underlying sources of uncertainty in a way which helps to clarify what is involved and how it can be managed.

The level of insight provided by this second pass model might prove enough, but it might not. Further passes might now be required, to explore the 'abnormal' 10–20 week possibility, or its 0.1–0.5 probability range, and to consider responses to 'normal' and 'abnormal' variability in terms of contingency actions and preventative actions. This could employ a range of well established decision tree and project risk management models [9,17,21–26].

We might be able to judge in advance that the models beyond the first pass outlined above are the most effective place to start. We might use intermediate alternatives directly or iteratively, including conventional models like the Beta distribution approximation associated with PERT [17]. The key point about the constructively simple approach for present purposes is the notion that 'there is no theoretically correct model, and there is no one best operational model'. We need to make subjective choices about operational models, and we need to change our choices as an analysis proceeds and it becomes clear that an area of uncertainty is more important than we first thought.

In interpreting any such estimates, three basic sources of residual uncertainty need to be distinguished:

'Known unknowns': explicit assumptions or conditions which, if not valid, could have uncertain, significant consequences;

'Unknown unknowns': implicit assumptions or conditions which, if not valid, could have uncertain, significant consequences;

'Bias': systematic estimation errors which have significant consequences beyond those noted above.
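Returning to the numbers in Table 3, the second pass two-scenario model reduces to a simple probability-weighted combination. The sketch below shows the arithmetic only; the ranges and the probability are the subjective estimates discussed above.

```python
# Second pass (Table 3): 'normal' and 'abnormal' scenarios, each a crude
# uniform range, combined via the probability of an abnormal situation.
def midpoint(low, high):
    return (low + high) / 2

normal_expected = midpoint(3, 7)      # 5 weeks
abnormal_expected = midpoint(10, 20)  # 15 weeks, equal to observed 15 by design
p_abnormal = midpoint(0.1, 0.5)       # 0.3, deliberately above the observed 0.2

# Combined expected duration: 5 x (1 - 0.3) + 15 x 0.3 = 8 weeks,
# versus 9 weeks from the cruder first pass.
combined = normal_expected * (1 - p_abnormal) + abnormal_expected * p_abnormal
print(combined)
```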

5. Understanding the conditional nature of estimates

If any estimate involves assumptions which may not be true, the conditional nature of the estimate, in terms of its dependence upon those assumptions being true, may be very important. Treating such an estimate as if it were unconditional (i.e., not dependent on any assumptions being true) may involve a serious misrepresentation of reality. Unfortunately, there is a common tendency for assumptions underpinning estimates to be subsequently 'overlooked' or not made explicit in the first place. This tendency is reinforced in the context of evaluating the combined effect of uncertainty about all activities in a project. Often this tendency is condoned and further reinforced by bias driven by a 'conspiracy of optimism'. Such treatment of assumptions is especially likely where people do not like uncertainty and they prefer not to see it, a very common situation. The presence of a conspiracy of optimism is more than enough to make this issue crucial in the formulation of estimates. If messengers 'get shot' for telling the truth, people will be motivated to be economical with the truth.

Understanding the conditional nature of estimates is particularly important when estimates prepared by one party are used by another party, especially when contractual issues are involved. To continue with our simple example, suppose the project manager concerned with estimating the approval duration used one of the several approaches outlined. How should the 'head office', the 'customer', or any other party who is a 'user' of the project manager's estimates, interpret the project manager's estimate of project duration?

The user would be wise to adjust the project manager's estimate to allow for residual uncertainty due to three basic sources: 'known unknowns', 'unknown unknowns' and 'bias', as defined above.

One problem is that adjusting estimates to allow for these sources of uncertainty often involves greater subjectivity than that involved in producing the estimates in question. This is an especially acute problem if 'objective estimates' are used which are irrational. User response to this problem varies. One approach is to collude, and make no adjustments since there is no objective way to do so. Such a response may reinforce and encourage any conspiracy of optimism or requirement for the appearance of objectivity in future estimating. Another response is to demand more explicit detailed information about assumptions and the potential limitations of estimates. However, unless this leads to more detailed scrutiny of estimates and further analysis, it does not in itself lead to changes in estimates. Indeed it may encourage the previously mentioned practice of 'conditional estimate cop-outs', especially if identified assumptions upon which an estimate depends become numerous and so less likely to be scrutinised and their implications less likely to be explored. A third response, which is very common, is for users of estimates to make informal adjustments to estimates, although the reasons for these adjustments may not be clearly articulated. For example, project managers may treat cost or duration estimates as pessimistic and set deliberately tight performance targets to compensate. A well-known consequence of this is the development of a vicious circle in the production and interpretation of estimates, whereby the estimator attempts to compensate for the user's anticipated adjustments, while suspicion of this practice encourages the estimate user to make increased adjustments to estimates. If several estimators are involved, and estimates combined in a nested fashion, the scope for uncertainty about how realistic aggregated estimates are can be considerable.

A current controversy in the UK centred on this issue is the use of generic adjustments to cost estimates as proposed by the Treasury [27] and the Department of Transport [28]. To adjust for the historically observed optimistic bias in project cost estimates, these organisations have produced statistical estimates of optimistic bias by project type. It is argued that these estimates of optimistic bias should be used directly as a scaling factor on all future cost estimates unless the process used to produce the estimate warrants lower adjustment. No clear advice is provided on the adjustment process in [27], and such adjustments are discouraged in [28].

6. Adjusting estimates for residual uncertainty

This statistical attempt to arrive at a suitable generic scaling factor is an example of 'irrational objectivity', because it fails to consider underlying mechanisms for cost estimation or to address the three sources of residual uncertainty associated with any estimate. We argue that a constructively simple approach, which then applies subjective scaling factors for 'known unknowns' (Fk), 'unknown unknowns' (Fu) and 'bias' (Fb), offers a more rational method for adjusting estimates (E).

Each scaling factor will itself be uncertain in size. Each factor is 1 ± 0 if a negligible adjustment effect is involved, but expected values different from 1 for each factor, and an associated rational subjective probability distribution for each factor with a non-zero spread, will often be involved. Factors Fk, Fu or Fb < 1 will then signify a downward adjustment to an estimate E, while Fk, Fu or Fb > 1 signifies an upward adjustment.

To test the validity of the project manager's estimate of project duration as a whole, and to maintain simplicity, suppose the user of this estimate takes a sample of one activity estimate, and selects the estimated duration of design approval for this purpose.

Consider first the adjustment factor Fk for 'known unknowns', which incorporates any explicit assumptions which matter. If the project manager has followed the constructively simple approach (as per Table 3) and listed sources of uncertainty embodied in the 'normal situation' and 'abnormal situation', these lists look appropriate, and the quantification of associated uncertainty looks appropriate, then a negligible adjustment for 'known unknowns' may be warranted, and Fk = 1 ± 0 may be reasonable. However, if the project manager has recorded a conditional estimate cop-out for the approval duration of 3 weeks, this should suggest an expected value for Fk greater than 2, with an anticipated outcome range of 1–10, if the user is familiar with data like that of Table 1 and analysis like that of Table 3. It would be irrational for the user to fail to make such an adjustment.
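The arithmetic implied by that adjustment can be made explicit in a short sketch. The 3-week figure and the Fk values come from the example just described; the code itself is only an illustration of how a user might record the adjustment:

```python
# Conditional 'cop-out' estimate of design approval duration, in weeks
e = 3.0

# User's subjective adjustment for 'known unknowns':
# an expected Fk of about 2, with an anticipated outcome range of 1-10
fk_expected = 2.0
fk_low, fk_high = 1.0, 10.0

adjusted_expected = e * fk_expected          # 6.0 weeks
adjusted_range = (e * fk_low, e * fk_high)   # 3.0 to 30.0 weeks
```

In other words, the user's adjusted expectation for approval duration is roughly double the cop-out figure, with a plausible range of 3–30 weeks.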

Similarly, an Fu = 1 ± 0 may be reasonable if the project manager made a provision for 'unknown unknowns' when quantifying approval duration estimates in a Table 3 format which the user deems suitably conservative in the light of the quality of the identification of explicit assumptions. In contrast, an expected Fk greater than 2 with an anticipated outcome range of 1–10 may suggest comparable values for Fu, depending upon the user's confidence about the Fk estimation and the quality of the project manager's estimate more generally. It is important to recognise that the quality of an estimating process has to be judged as part of the process of interpreting an estimate. The quality of the process for dealing with known unknowns is a key indicator of the extent to which unknown unknowns should be anticipated.

In respect of any adjustment for residual systematic estimation errors or 'bias', setting Fb = 1 ± 0 may be reasonable if Fk = 1 ± 0 and Fu = 1 ± 0 seem sensible estimates, reasonable assumptions about statistical dependence have been used in the evaluation process, and the organisation involved has a history of no bias. However, if estimates of design approval duration are thought to be understated relative to recent organisational history, or statistical independence is assumed in the evaluation process without good reason, a suitably large Fb expected value and associated spread is warranted. Overstatement of uncertainty may also require adjustment.

Estimating scaling factors should depend to some extent on how they will be combined. The expected values of the scale factors might be applied to the conditional expected value of an estimate E to obtain an adjusted expected value Ea in a number of ways [24]. The authors' preference is a multiplicative approach, assuming Ea = E × Fk × Fu × Fb. This is the most conservative choice, assuming the adjustments should operate in a cumulative fashion, and it is operationally the simplest. The product Fk × Fu × Fb constitutes a single 'cube' factor, short for 'known unknowns', 'unknown unknowns' and 'bias' (kuuub or KUUUB), conveniently designated F3.
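The multiplicative combination can be sketched in a few lines. The function mirrors Ea = E × Fk × Fu × Fb as defined above; the numerical factor values in the usage example are hypothetical illustrations, not figures from the paper:

```python
def adjusted_estimate(e, f_k=1.0, f_u=1.0, f_b=1.0):
    """Apply the multiplicative KUUUB adjustment Ea = E * Fk * Fu * Fb.

    e   -- conditional expected value of the original estimate E
    f_k -- scaling factor for 'known unknowns'
    f_u -- scaling factor for 'unknown unknowns'
    f_b -- scaling factor for residual systematic 'bias'
    """
    f3 = f_k * f_u * f_b  # the single 'cube' factor F3
    return e * f3, f3

# A 3-week estimate adjusted with hypothetical factors: a suspected
# cop-out (Fk = 2.0), a modest unknown-unknowns provision (Fu = 1.2)
# and a history of optimistic bias (Fb = 1.5)
ea, f3 = adjusted_estimate(3.0, f_k=2.0, f_u=1.2, f_b=1.5)
# F3 is about 3.6, so the adjusted expected duration Ea is about 10.8 weeks
```

Note that with default factors of 1.0 the adjustment is neutral, matching the F3 = 1 ± 0 case discussed below.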

Large F3 values will seem worryingly subjective to those who cling to an 'irrational objectivity' perspective. However, explicit attention to F3 factors is an essential part of a 'rational subjectivity' approach. It is seriously irrational, in Simon's procedural rationality sense [10], to assume F3 = 1 ± 0 without sound grounds for doing so. At present, most organisations fail this rationality test. This is a cultural problem which has clear links to the uncertainty, model and probability concept issues discussed earlier.

The key value of explicitly quantifying F3 in a decomposed form is forcing those involved to think about the implications of the factors which drive the expected size and variability of F3. Such factors may be far more important than the factors captured in a prior conventional estimation process, where there is a natural tendency to forget about conditions and assumptions and focus on the numbers. Different parties may have different views about an appropriate F3, but the process of discussion should be beneficial. If an organisation refuses to acknowledge and estimate F3 explicitly, the issues involved do not go away; they simply become unmanaged risks, and many of them will be betting certainties.

For the most part, high levels of precision in F3 factors and component factors are not practicable or needed. The reason for sizing F3 factors is 'insight, not numbers'. However, more developed versions explicitly recognising subjective probability distributions for F3 and its components are feasible, and may be appropriate in subsequent estimation or modelling iterations where this is constructive [14].
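Where such distributions are warranted, the spread of F3 can be explored with a simple Monte Carlo sketch. The triangular distributions below are purely illustrative assumptions about one user's subjective judgements, not values from the paper:

```python
import random

def sample_f3(n=10_000, seed=1):
    """Sample the 'cube' factor F3 = Fk * Fu * Fb, with each component
    drawn from a subjective triangular distribution (low, high, mode)."""
    rng = random.Random(seed)
    fk = (1.0, 10.0, 2.0)  # 'known unknowns': cop-out suspected
    fu = (1.0, 3.0, 1.2)   # 'unknown unknowns'
    fb = (1.0, 2.0, 1.3)   # residual 'bias'
    return [rng.triangular(*fk) * rng.triangular(*fu) * rng.triangular(*fb)
            for _ in range(n)]

samples = sample_f3()
mean_f3 = sum(samples) / len(samples)
# With these assumptions the expected F3 is well above 1, signalling a
# substantial upward adjustment with a wide spread of possible outcomes.
```

The point of such a sketch is the one made in the text: the shape and spread of F3, not a spuriously precise number, is what prompts useful discussion.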

The size of appropriate F3 factors is not just a simple function of objective data availability and the use of statistical estimation techniques. It is a function of the quality of the whole process portrayed by Fig. 1. In a project management context it will include issues driven by factors like the nature of the intended contracts. In practice, if one sampled activity duration estimate, such as the duration of design change approval, yields a cube factor significantly greater than 1, this ought to prompt scrutiny of other activity estimates and of the role of the estimates in a wider context. Conversely, if no sample activity estimates are examined, this ought to lead to a large F3 value for a whole-project estimate, given the track record of most organisations. Project teams and all users of their estimates need to negotiate a jointly optimal approach to producing original estimates and associated F3 factors. Any aspect of uncertainty which is left out by an estimate producer, but which is of interest to an estimate user, should be addressed in the user's F3.

Interpreting another party's parameter estimates always requires explicit consideration of an F3 factor, even if simple deterministic estimates are involved and risk management is entirely invisible. Estimators and users of estimates who do not have an agreed approach to F3 factors are communicating in an ambiguous fashion which is bound to generate mistrust. With trust being linked to levels of project performance [29,30], it is important to nurture a high-trust relationship between parties through open and honest communication when considering the size of F3 factors.

A comprehensive approach to project risk and uncertainty management has to address cube factors defined in this framework for all relevant measures of project performance. If, for example, a failure to consider responses to delays when estimating time risk means time risk is overestimated, cost risk will be underestimated, because the additional funds required to respond to delay will be overlooked. Further, if the contracts used do not align the objectives of the parties in a congruent manner, any parameter estimates which do not incorporate a full reflection of the consequences will be unreliable. If, for example, the design chosen cannot deliver what all parties expect, any parameter estimates which do not include a full reflection of the consequences will be unreliable. The same is true of assumed resource availability, performance assumptions, timing assumptions, and activity-based task structure assumptions.

7. Conclusion

Uncertainty and probability are seen as incompatible concepts by many who are uncomfortable with subjective probabilities, a line of thinking often associated with Knight [31]. The approach to estimation and evaluation outlined here is built upon a generalisation of Raiffa's view [9] that probabilities can embrace uncertainty; they are necessarily subjective, and they are models which involve successive levels of complexity to approach 'the truth' in terms of a complete understanding of the situation being addressed. For example, Raiffa discusses 'second-order probability models' which address uncertainty about probabilities, although he does not exploit this notion directly. The constructively simple approach, which is summarised in Fig. 4, dismisses the classical objective view of probabilities as necessarily data based in relation to a single model which is assumed to be 'true'. Both the classical approach and the constructively simple approach accept that 'the truth' is unknowable, but the classical approach looks to more data for more understanding, while the constructively simple approach looks to a deeper modelling structure and the input of more people who understand some aspects of what is going on, plus more data at an appropriate level of structure if it is available, with a view to a richer internally consistent synthesis of subjective and objective information.

All uncertainty can be embraced in a simple range estimate like that of Table 2. As 'normal' and 'abnormal' variability are distinguished, and then different sources of abnormal variability are identified, we become more informed about the nature of the uncertainty we face. This makes us better able to manage, in terms of responses specific to particular sources of uncertainty and responses which are general sources of flexibility. An iterative estimation and evaluation process has very little to do with 'objectivity' in the classical sense. It has everything to do with 'rationality' in Simon's sense [10]. It is built upon decision analysis concepts which social scientists have embraced for decades. However, an iterative approach to estimation and evaluation with a subjective basis is very uncomfortable for engineers and scientists who have not confronted the limitations of objective probability theory. Most project managers and many others involved in project management have engineering or science backgrounds, and a 'hard science' perspective. Consequently, whether projects persist with a one-pass approach employing the conventional objective perspectives outlined earlier, or move towards an iterative approach based on a subjective perspective like the constructively simple approach just outlined, is to a significant extent a cultural issue. Acceptance of the subjective nature of rational probability estimates is at the heart of the issue.

[Fig. 4. Summary of the 'constructively simple' approach to estimation and evaluation. Flowchart: Start; first pass using a uniform distribution for a 'simple' expected value and variability estimate; second pass with 'simple' exploration of causality and probability of 'abnormal' variability; an 'is the estimate fit for purpose?' check after each pass; further pass(es) applying more complex models and different approaches, involving more people, further exploring 'abnormal' durations and probability, and developing contingencies and preventative actions, drawing on historical data and brainstorm output; finally the 'end-user' applies an F3 factor to adjust for residual uncertainty; End.]

What the simple activity duration estimation example described in this paper illustrates is that, even in a very simple context, culture influences our perception of parameter uncertainty. Uncertainty and culture are connected in a manner revealed by a constructively simple framework, because a constructively simple approach was designed to manage all uncertainty, including aspects driven by culture. If deterministic or conventional risk management approaches are used, this connection may not be visible and so it cannot be managed. A serious 'conspiracy of optimism' and endemic 'irrational objectivity' are usually drivers of seriously dysfunctional organisational decision making. There are other cultural conditions of concern, but these two are particularly common, and especially perverse.

Interpreting other people's estimates of project parameters and associated modelling assumptions has to address questions of trust, integrity, individual ability to deliver on promises, collective teamwork, imagination, creativity, ability to improvise under pressure, and so on. A broadly defined approach to uncertainty management using a constructively simple paradigm can help to manage these issues, but for people to become comfortable with this approach a significant culture change is necessary in most organisations. We can change an organisation's processes to change the culture, but at a broad 'everyone involved' level it may make sense to start by trying to change the corporate culture directly to some extent. The authors hope that exposure to the example developed in this paper might serve as a first step to achieving this goal for some organisations.

References

[1] Harrison R. Understanding your organization's character. Harvard Business Rev 1972;50(3):119–28.

[2] Daellenbach HG, George JA. Introduction to operations research techniques. Boston: Allyn and Bacon; 1978.

[3] APM, Association for Project Management. Project risk analysis and management (PRAM) guide. High Wycombe: APM Publishing; 2004.

[4] PMI, Project Management Institute. A guide to the project management body of knowledge: PMBOK guide. Upper Darby, PA: PMI; 2000 [chapter 11].

[5] ICE, FA and IA, Institution of Civil Engineers, Faculty of Actuaries and Institute of Actuaries. RAMP: risk analysis and management for projects. London: Thomas Telford; 2002.

[6] OGC, Office of Government Commerce. Management of risk (MoR): guidance for practitioners. London: The Stationery Office; 2002.

[7] Standards Australia and Standards New Zealand. AS/NZS 4360:2004 Risk management. www.standards.com.au; 2004.

[8] Ward SC, Chapman CB. Transforming project risk management into project uncertainty management. Int J Proj Manage 2003;21:97–105.

[9] Raiffa H. Decision analysis: introductory lectures on choices under uncertainty. Reading, MA: Addison-Wesley; 1968.

[10] Simon HA. Rational decision-making in business organisations. Am Econ Rev 1979;69:493–513.

[11] Ward SC. Arguments for constructively simple models. J Opl Res Soc 1989;40:141–53.

[12] Chapman CB, Ward SC. Estimation and evaluation of uncertainty: a minimalist first pass approach. Int J Proj Manage 2000;18:369–83.

[13] Chapman CB, Ward SC. Constructively simple estimating: a project management example. J Opl Res Soc 2003;54:1050–8.

[14] Chapman CB, Ward SC. Managing project risk and uncertainty: a constructively simple approach to decision making. Chichester: Wiley; 2002.

[15] Deal T, Kennedy A. The new corporate cultures: revitalising the workplace after downsizing, mergers and reengineering. London: TEXERE Publishing Ltd; 1999.

[16] Hoel PG, Jessen RJ. Basic statistics for business and economics. New York: Wiley; 1971.

[17] Moder JJ, Phillips CR. Project management with CPM and PERT. New York: Van Nostrand; 1970.

[18] Kahneman D, Slovic P, Tversky A. Judgement under uncertainty: heuristics and biases. Cambridge: Cambridge University Press; 1982.

[19] Oldfield A, Ocock M. Managing project risks: the relevance of human factors. Int J Proj Business Risk Manage 1997;1(2):99–109.

[20] March JG, Shapira Z. Managerial perspectives on risk and risk taking. Manage Sci 1987;33(11):1404–16.

[21] Cooper DF, Chapman CB. Risk analysis for large projects. Chichester: Wiley; 1987.

[22] Grey S. Practical risk assessment for project management. Chichester: Wiley; 1995.

[23] Williams T. Modelling complex projects. Chichester: Wiley; 2002.

[24] Chapman CB, Ward SC. Project risk management: processes, techniques and insights. 2nd ed. Chichester: Wiley; 2003.

[25] Cooper DF, Grey S, Raymond G, Walker P. Managing risk in large projects and complex procurements. Chichester: Wiley; 2004.

[26] US Nuclear Regulatory Commission. Reactor safety study: an assessment of accident risk in US commercial nuclear power plants. WASH-1400, NUREG-75/014; 1975.

[27] HM Treasury. The Green Book: appraisal and evaluation in central government. London: HM Treasury, 1 Horse Guards Road, SW1A 2HQ; 2003.

[28] Flyvbjerg B, in association with COWI. Procedures for dealing with optimism bias in transport planning: guidance document. Report 58924, Issue 1. London: Department for Transport; 2004.

[29] Kadefors A. Trust in project relationships – inside the black box. Int J Proj Manage 2004;22:175–82.

[30] Munns AK. Potential influence of trust on the successful completion of a project. Int J Proj Manage 1995;13(1):19–24.

[31] Knight F. Risk, uncertainty and profit. Boston: Houghton Mifflin; 1921.