Software development risks to project effectiveness
James Jiang a,1, Gary Klein b,*
a Department of Computer Information Systems, College of Administration and Business, Louisiana Tech University, P.O. Box 10318, Ruston, LA 71272, USA
b College of Business and Administration, The University of Colorado at Colorado Springs, 1420 Austin Bluffs Parkway, P.O. Box 7150, Colorado Springs, CO 80933-7150, USA
Received 10 January 1999; received in revised form 21 January 1999; accepted 14 February 1999
Abstract
Controlling the risks to a software development project is a critical issue examined by information system researchers. Many of the studies and methods to date examine the risks from the broad aspect of system success but attempt to promote activities related to a narrow domain of risks. Our intent is to examine the impact of the spectrum of risks on different aspects of system development. We report the results of a survey of 86 project managers indicating that common aspects of project effectiveness are generally under control, but are most affected by a lack of expertise on the project team. Significant relationships show that lack of clear role definition and conflicts on the team are also elevated risks. Other items are less critical or limited to a much smaller aspect of effectiveness than overall success. Focusing on the more important risk aspects allows for more effective management of the project and a narrowing of the techniques needed to mitigate the significant risks. © 2000 Elsevier Science Inc. All rights reserved.
1. Introduction
Software development projects often exceed time and budgetary constraints and do not even meet all user requirements (Weill and Broadbent, 1998; Barki et al., 1993; Nidumolu, 1995; Nidumolu, 1996; Lyytinen, 1988). Software engineering researchers argue that various development risks are a key problem affecting performance. The purpose of software engineering is therefore to improve the software development process, particularly by controlling the sources of project development risk, and to produce quality software for organizations and users (Christopher et al., 1996; Pressman, 1992). This control is accomplished using methods, tools, and procedures that bring more structure and formalism to the complexity and risks of software development (Nidumolu, 1995, 1996).
To begin reducing the system development failure rate, software engineering and information systems (IS) researchers have attempted to identify the various risk factors. The risks identified by researchers include: nonexistent or unwilling users, multiple users or implementers, turnover among all parties, inability to specify purpose or usage, inability to cushion the impact on others, loss or lack of support, lack of experience, technological complexity, the degree of novelty or structure of the application, technological change, and project size. These can significantly impact the outcome of IS project development efforts (Jones and Harrison, 1996).
After risks in software development were identified, many different risk control procedures and methods were proposed by IS researchers (Humphrey, 1989). Larson and Gobeli (1989) proposed a matrix management technique for the project team's structure. Blackburn et al. (1996) and Hauptman and Hirji (1996) argued for the importance of software process flexibility and concurrency. Carmel and Becker (1995) and Dion (1993) emphasized the importance of packaged software process predictability. Cusumano (1991) and Christopher et al. (1996) stated the need for reusability to reduce software development over-budget problems. Certainly these new development procedures and methods are critical to reaching greater productivity, but no one has been able to link these new techniques with successful systems. On the contrary, leading-edge technology or development methods are perceived as increasing system risks rather than alleviating them (Humphrey, 1989; Jiang et al., 1998).
The Journal of Systems and Software 52 (2000) 3–10
www.elsevier.com/locate/jss
* Corresponding author. Tel.: +1-719-262-3157; fax: +1-719-262-3494.
E-mail addresses: [email protected] (J. Jiang), gklein@mail.uccs.edu (G. Klein).
1 Tel.: +1-318-257-3445; fax: +1-318-257-4253.
0164-1212/00/$ - see front matter © 2000 Elsevier Science Inc. All rights reserved.
PII: S0164-1212(99)00128-4
Linkages between software development risks and the various dimensions of system success (e.g., system quality, cost, schedule) are generally overlooked in the IS literature. Yet this is an important step for advancing our knowledge of project risks, because different project risks very likely affect the various dimensions of system success differently. For example, lack of user support and lack of user experience with the new technology may affect system use more significantly than other risk factors. On the other hand, application complexity and lack of development team experience with the task may affect overall cost and schedules more significantly than other risk factors. A particular control procedure or method can reduce only certain aspects of software development risk, not others. For example, the use of structured design methods may reduce the risk of application complexity but may not significantly affect user support or management commitment. The linkages between risks and the various dimensions of system success can help project managers select the implementation strategies needed to achieve their desired project outcomes.
The purpose of this study is to relate various software development risks to project effectiveness. Specifically, the 12 most common software development risks proposed by Barki et al. (1993) will be examined. These include project size, technical complexity, extent of change, resource insufficiency, lack of general expertise, lack of user experience, lack of user support, intensity of conflicts, lack of clarity of role definitions, and task complexity. Project effectiveness is the specific dimension of system performance that will be examined in this study. The measures include meeting project budgets and schedules, amount of work, quality of work, and operational efficiency. Project effectiveness is examined because most reported project failures fail in either schedule or budget.
2. Related studies
System development projects frequently fall short of meeting their anticipated results. Projects come in years behind schedule, exceed their budgets by millions, and fail to meet their users' needs even when eventually completed (Weill and Broadbent, 1998; Cafasso, 1994; Barki et al., 1993; Nidumolu, 1995; Nidumolu, 1996). To increase the chance of system success, many IS researchers have attempted to identify the potential factors that threaten successful project development. A review of identified IS project risk variables is listed in Table 1. While examined by some authors (Barki et al., 1993), the concept of project development risk in the IS domain still lacks a generally accepted means of assessment. An important step forward in risk-based research is the initial measure for this construct proposed by Barki et al. (1993). The construct incorporates the many aspects of software engineering risk as well as those along managerial and political dimensions.
Meanwhile, a large number of system performance criteria have been developed and empirically tested, including IS usage, user information satisfaction, quality of decision making, productivity from cost/benefit analysis, team effectiveness, and project effectiveness. Due to the difficulty of quantifying costs and benefits and linking them to a particular system development, measures based on user perceptions have become particularly prominent in the IS literature. However, many other researchers argue that system success is a multidimensional concept. For example, the system success measures proposed by Delone and McLean (1992) include system use, information quality, system quality, user satisfaction, individual impact, and organizational impact.
Saarinen (1996) proposed a system success measure with four dimensions: system development process, system use, system quality, and organizational impacts. The identification of these distinct dimensions of system performance illustrates that a project can be both successful and unsuccessful at the same time, depending on the metric selected. It is conceivable that project team members would view a project as a technical success even though the financial reality is that it failed to return the company's investment. In short, system success can be classified into (1) business outcomes (impact on the individual and impact on the organization), (2) technical performance (system performance), (3) efficiency (project operation efficiency in terms of costs, time, and productivity), (4) user satisfaction, and (5) IS personnel satisfaction (professional growth, challenge, and interest).
The arguments for the effects of specific software development risks on project success are largely based on anecdotal evidence or armchair theorizing. Empirical evidence on the relationship between risk and project effectiveness is rare and oftentimes fails to take into account the various risk factors that may hinder success. Will all the various software development risks have the same effects on overall project effectiveness? If not, which risks are more influential? Which risks have greater impact on the schedule, budget, goals, efficiency, speed, and quantity and quality of work performed? The answers to these questions can help IS practitioners better understand how the risks affect their project performance. In addition, appropriate procedures can then be adopted to control these risks and achieve the desired results. To answer these questions, a series of studies is needed. The scope of this exploratory study is to determine which project risks most affect project effectiveness.
3. Methods
The instrument used to measure the project development risks was adopted and shortened from Barki et al. (1993). The items and categories of risk are in Table 2. The risks considered cross the range of those in Table 1. Of concern are the risks due to the technical nature of the project, the risks associated with lack of skills on the project team, resource limitations, personal communication issues, and failure to involve all the appropriate stakeholders. The questionnaire asks respondents about the complexity and problems in their most recently completed IS projects. Each item was scored using a seven-point scale ranging from not at all (1) to extremely (7). Each category score is the average of all items listed. All the risks were scaled so that the greater the score, the greater the risk of the particular factor. Table 2 also shows the mean responses to each of the categories.
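The scoring just described (item averages on a seven-point scale) can be sketched as follows; the responses below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical responses to one five-item risk category on the seven-point
# scale (1 = not at all, 7 = extremely); rows are respondents, columns items.
responses = np.array([
    [2, 3, 4, 3, 2],
    [5, 6, 5, 7, 6],
    [1, 2, 1, 2, 2],
])

# A category score is the average of its items, so a higher score
# indicates a higher level of that risk factor.
category_scores = responses.mean(axis=1)
print(category_scores)  # [2.8 5.8 1.6]
```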
One of the most powerful methods to test construct validity is factor analysis (Kerlinger, 1986). Principal component analysis was conducted using all the items listed in Table 2. From the analysis, a twelve-factor solution emerged using the eigenvalue-greater-than-1 rule, accounting for 76% of the variance. The results are consistent with Barki et al.'s (1993) framework. Cronbach's alpha was calculated to assess measurement reliability; the values are also reported in Table 2. All alpha values are acceptable to high for survey-based research, implying an adequate level of internal consistency.
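The two checks named above (Cronbach's alpha and the eigenvalue-greater-than-1 rule) can be sketched generically as below; the simulated scores are an assumption for illustration, not the study's instrument data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def kaiser_factor_count(items: np.ndarray) -> int:
    """Number of principal components of the item correlation matrix with
    eigenvalue > 1 (the eigenvalue-greater-than-1 rule used in the text)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
    return int((eigvals > 1.0).sum())

# Simulate 86 respondents answering five items driven by one latent factor,
# rounded and clipped to the 1-7 survey scale (purely illustrative data).
rng = np.random.default_rng(0)
latent = rng.normal(size=(86, 1))
scores = np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=(86, 5))), 1, 7)

print(round(cronbach_alpha(scores), 2))   # internal consistency of the scale
print(kaiser_factor_count(scores))        # factors retained by the Kaiser rule
```

With a single strong latent factor, the Kaiser rule retains one component and alpha lands in the "acceptable to high" range the text describes.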
The instrument used to measure project effectiveness was adopted from Larson and Gobeli (1989) and Larson (1997). The questionnaire asks respondents about the extent of each project effectiveness item in their most recently completed IS project. The items include efficiency considerations of amount and quality of work, adherence to schedules and budgets, speed and efficiency, and ability to meet goals. Each item was scored using a seven-point scale ranging from not at all (1) to extremely (7). Overall project effectiveness was calculated by averaging across all items. All the items were scored so that the greater the score, the greater the effectiveness. To examine the validity of the project success measure, we conducted a principal component analysis. The factor analysis produced one factor that accounted for 84% of the total variance. Table 3 shows the items, the mean responses, and the alpha value. The alpha value for the scale is 0.92.
3.1. Sample
In this study, questionnaires were mailed to 500 IS project managers in the US, all current members of the Project Management Institute (PMI). This sample was chosen because members of PMI have expertise in project management, represent a wide variety of managerial settings, and have been widely used in project management research. Self-addressed return envelopes were enclosed with each questionnaire. All respondents were assured that their responses would be kept confidential. A total of 86 questionnaires were returned, for a response rate of 18%. This response rate is consistent with other mail surveys. A summary of the demographic characteristics of the sample is presented in Table 4. The demographic characteristics of this sample were similar to other studies involving PMI members (Larson, 1997).
Table 1
Summary of project development risks

Risk factors: First authors (year)
Project size: McFarlan (1981); Beath (1983); Neumann (1994); Zmud (1980); Barki et al. (1993)
Team size: Alter (1979); McFarlan (1981); Casher (1984)
Technical complexity: Beath (1983); Davis (1982); Zmud (1980); Boehm (1989); Barki et al. (1993)
Technical newness: McFarlan (1981); Beath (1983); Barki et al. (1993)
Application newness: Zmud (1980); Beath (1983)
Team diversity: Alter (1979); Zmud (1980); Casher (1984)
Team expertise (Project development): McFarlan (1981); Davis (1982); Boehm (1989); Barki et al. (1993)
Team expertise (Application): Alter (1979); Anderson et al. (1979); Davis (1982); Boehm (1989); Jiang et al. (1996)
Team expertise (Task): Anderson et al. (1979); Neumann (1994); Boehm (1989)
Team expertise (General): Anderson et al. (1979); Boehm (1989); Davis (1982); Jiang et al. (1996); Barki et al. (1993)
Experience of leader: Anderson et al. (1979); Boehm (1989)
Number of users: Alter (1979); Alter and Ginzberg (1978); Davis (1982); Barki et al. (1993)
Diversity of users: Alter (1979); Davis (1982); McFarlan (1981)
User attitudes: Alter (1979); Beath (1983); Anderson et al. (1979); Jiang et al. (1996)
Top management support: Anderson et al. (1979); Barki et al. (1993); Jiang et al. (1996)
Resource insufficiency: Boehm (1989); Alter (1979); Anderson et al. (1979)
Task characteristics: McFarlan (1981); Alter (1979)
Conflicts: Anderson et al. (1979); Casher (1984); Barki et al. (1993)
The sample population appears to add to the credibility of the study, as over 80% of the respondents held managerial IS positions: project leaders, IS managers, or IS executives. Over 90% had more than five years of IS work experience, and just under 92% worked in organizations with 100 employees or more. The majority had experience in a wide range of applications. To mitigate concerns about bias introduced by the sample, each demographic variable was used as the independent variable in a separate ANOVA with the project effectiveness measure as the dependent variable. A significant relationship would indicate a possible bias. No demographic variable was found to have a significant relation to the project effectiveness measure. This leads us to believe the sample population introduced no bias into the study.
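The bias check described above (one ANOVA per demographic variable) can be sketched as follows. The grouping and scores are invented for illustration; only the F statistic is computed here, with the p-value coming from the F distribution with k−1 and n−k degrees of freedom:

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic across groups of effectiveness scores."""
    data = [np.asarray(g, dtype=float) for g in groups]
    all_scores = np.concatenate(data)
    grand_mean = all_scores.mean()
    k, n = len(data), len(all_scores)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in data)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical effectiveness scores split by one demographic category
# (e.g., position); near-identical group means give an F close to zero.
print(anova_f([[4.1, 4.5, 4.3], [4.2, 4.6, 4.1], [3.9, 4.4, 4.5]]))
```

An F statistic far below the critical value, repeated across every demographic variable, is what supports the paper's conclusion of no sampling bias.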
4. Data analysis and results
The mean responses to the categories of risk and the items of project effectiveness are in Tables 2 and 3, respectively. The sample population generally found that the risks associated with system development did not impact their most recent project to a great extent. Near the low end of risk-associated difficulties were those associated with conflicts and roles of the team members. Difficulties due to complexity and size of the
Table 2
Project risk categories and items

Technological acquisition (Cronbach's alpha (α) = 0.76; mean = 3.30 out of 7)
(1) Need for new hardware
(2) Need for new software
(3) Large number of hardware suppliers
(4) Large number of software suppliers
(5) Large number of external users will use the system

Project size (α = 0.86; mean = 4.2)
(1) Large number of people on team
(2) Large number of different "stakeholders" on team (e.g., IS staff, users, consultants, suppliers, customers)
(3) Large project size
(4) Large number of users will be using this system
(5) Large number of hierarchical levels occupied by users who will be using the system (e.g., office clerks, supervisors)

Lack of team's general expertise (α = 0.85; mean = 3.72)
(1) Ability to work with uncertain objectives
(2) Ability to work with top management
(3) Ability to work effectively as a team
(4) Ability to understand human implications of a new system
(5) Ability to carry out tasks effectively

Lack of team's expertise with the task (α = 0.86; mean = 3.49)
(1) In-depth knowledge of the functioning of the user department
(2) Overall knowledge of organizational operations
(3) Overall administrative experience and skill
(4) Expertise in the specific application area of the system
(5) Familiarity with this type of application

Lack of team's development expertise (α = 0.82; mean = 3.27)
(1) Development methodology used in this project
(2) Development support tools used in this project
(3) Project management tools used in this project
(4) Implementation tools used in this project

Lack of user support (α = 0.90; mean = 3.57)
(1) Users have a negative opinion about the system meeting their needs
(2) Users are not enthusiastic about the project
(3) Users are not an integral part of the development team
(4) Users are not available to answer questions
(5) Users are not ready to accept the changes the system will entail
(6) Users respond slowly to development team requests
(7) Users have negative attitudes regarding the use of computers in their work
(8) Users are not actively participating in requirement definition

Intensity of conflicts (α = 0.73; mean = 3.00)
(1) Great intensity of conflicts among team members
(2) Great intensity of conflicts between users and team members

Extent of changes brought (α = 0.68; mean = 3.79)
(1) The system requires a large number of user tasks to be modified
(2) The system will lead to major changes in the organization

Resource insufficiency (α = 0.78; mean = 3.84)
(1) In order to develop and implement the system, the scheduled number of person-days is insufficient
(2) In order to develop and implement the system, the dollar budget provided is insufficient

Lack of clarity of role definitions (α = 0.85; mean = 3.25)
(1) Role of each member of the team is not clearly defined
(2) Role of each person involved in the project is not clearly defined
(3) Communications between those involved in the project are unpleasant

Application complexity (α = 0.77; mean = 4.26)
(1) Technical complexity (e.g., hardware, software, database)
(2) Large number of links to existing systems
(3) Large number of links to future systems

Lack of user experience (α = 0.88; mean = 3.40)
(1) Users are not very familiar with system development tasks
(2) Users have little experience with the activities to be supported by the future applications
(3) Users are not very familiar with this type of application
(4) Users are not aware of the importance of their roles in successfully completing the project
(5) Users are not familiar with data processing as a working tool
Table 3
Project effectiveness items^a

Item                               Mean
(1) Ability to meet project goals  4.78
(2) Amount of work produced        5.07
(3) Quality of work produced       5.01
(4) Adherence to schedules         4.15
(5) Efficiency of operations       4.13
(6) Speed of operations            4.21
(7) Adherence to budgets           4.60

^a Project effectiveness (α = 0.92, mean = 4.57).
system seem to have been the ones encountered the most, while all others were still experienced to a moderate degree. In terms of performance, the project managers were generally satisfied with the project effectiveness outcomes, with efficiency, speed, and schedules pulling up the rear.
To answer the question of which project risk variables are more important in influencing project effectiveness, a multiple regression analysis was applied to investigate the relationships between project effectiveness and the project risk factors. Appropriate assumption tests for this technique (i.e., normality, multicollinearity, variance equality, and linearity) were performed; no assumption was violated. The R-square indicates that 43% of the variance in overall project effectiveness is explained by the regression model, which is high for behavioral constructs. Results are in Table 5.
The overall P-value (0.00) indicates that there is a significant relationship between the project risk variables and project effectiveness. The individual P-values indicate that lack of the project team's general expertise (0.00), intensity of conflicts (0.04), and lack of clarity of role definition among team members (0.01) are most significantly related to overall project effectiveness. The remaining risk factors did not relate to the overall measure of effectiveness. The result supports the proposition that certain project risk variables are more important than others in influencing project effectiveness.
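A regression of this form, with t-values per risk factor and a model R-square, can be sketched as below. The data are simulated (86 respondents, three stand-in risk predictors with one true negative effect), since the study's raw responses are not available:

```python
import numpy as np

def ols_summary(X, y):
    """Fit y = b0 + X b by ordinary least squares; return the coefficients,
    their t-values, and the model R-square."""
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])           # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = n - Xd.shape[1]
    sigma2 = (resid @ resid) / dof                  # residual variance
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)         # coefficient covariance
    t_values = beta / np.sqrt(np.diag(cov))
    ss_tot = ((y - y.mean()) ** 2).sum()
    r_square = 1.0 - (resid @ resid) / ss_tot
    return beta, t_values, r_square

# Simulated survey: only the first "risk" truly lowers effectiveness.
rng = np.random.default_rng(1)
X = rng.normal(size=(86, 3))
y = 4.0 - 0.8 * X[:, 0] + rng.normal(scale=0.5, size=86)

beta, t_values, r_square = ols_summary(X, y)
print(t_values[1:])       # a large negative t-value for the first predictor
print(round(r_square, 2))
```

A t-value far from zero corresponds to a p-value well below 0.05, which is how the significant risk factors in Table 5 are distinguished from the rest.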
In addition, a total of seven independent multiple regression analyses were conducted, one for each item of project effectiveness. The results are summarized in Table 6. The lack of the team's general expertise is significantly related to every single project effectiveness item. Lack of role clarity is significantly related to most project effectiveness items. Intensity of conflicts and lack of user experience also contribute to problems in the quality of the work produced. No other risk factor relates significantly to any of the items of effectiveness. The other risks are either under control or possibly dominated by the consequences of those indicated.
5. Summary and discussion
The results of the survey indicate that different project risks impact different aspects of system development to differing extents. The data, in general, indicate good control over risk factors. However, the focus on project effectiveness measures revealed that two common risks have a more significant impact on effectiveness: lack of general expertise on the team and lack of clear role definitions for team members. Intensity of conflict among project team members is also significant along the quality-of-work dimension.
Table 4
Sample demographics

Position
IS executive 8
IS manager 12
IS project leader 44
System analyst 3
Others 15
Not reported 4
Total 86

Work experience
1–5 years 5
6–10 years 12
11–15 years 20
16–20 years 20
21–25 years 8
26 or above 17
Not reported 4
Total 86

Gender
Male 65
Female 18
Not reported 3
Total 86

Age
25–30 years old 5
31–35 years old 10
36–40 years old 21
41–45 years old 15
46–50 years old 14
51–55 or above 14
Not reported 7
Total 86

Organization size (number of employees)
Under 100 employees 7
100–500 employees 12
500–1000 employees 7
1000–2500 employees 10
2500–5000 employees 6
5000–10 000 employees 22
10 000 or more employees 20
Not reported 2
Total 86

Experience in different kinds of applications
1–2 areas 11
3–4 areas 29
5–6 areas 18
7–8 areas 9
9 areas or more 17
Not reported 2
Total 86

Average number of members of IS projects participated in
2–3 members 1
4–5 members 13
6–10 members 22
11–15 members 12
16–20 members 15
21–25 members 18
Not reported 5
Total 86
The lack of the team's general expertise is composed of the ability to work with uncertain objectives, ability to work with top management, ability to work effectively as a team, ability to understand human implications of a new system, and ability to carry out tasks effectively. These are interpersonal and team skills that can be considered during the formation of the project team. If insufficient skills are available in the pool of developers, then it would be worthwhile to perform some training in these areas before the commencement of the project.
Skills can be addressed from a number of viewpoints, but overall, management should communicate the basic project parameters and management guidelines to the project team to allow for skill matching. The general skills lacking are more along the lines of working effectively as a team. Thus, building of team skills should be conducted by project managers throughout the project lifecycle. An especially intense effort might be needed during the team formation stage. The team can be brought together to discuss team operations, strengths, and directions for improvement. Also up for discussion should be the problems and issues likely to be faced and how the difficulties may be avoided. Project teams should obtain user participation and user commitment. It is difficult to obtain commitment without some kind of quid pro quo or demonstration that the system will help the users.
Lack of clarity of role definition includes the role of each member of the team not being clearly defined, the role of each person involved in the project not being clearly defined, and communications between those involved in the project being unpleasant (a side effect of the first two items). Many of the concerns involved with this aspect can be addressed at the organizational level to provide
Table 6
Summary of relationships between risk factors and project effectiveness measures
(Measures: overall, project goals, amount of work, quality of work, meet schedules, operation efficiency, operation speed, meet budgets; only significant relationships are shown.)

Technological acquisition: no significant relationships
Application size: no significant relationships
Lack of team's general expertise: significant for the overall measure and all seven items (P = 0.00, 0.00, 0.01, 0.00, 0.01, 0.02, 0.00, 0.01)
Lack of team's expertise with task: no significant relationships
Lack of team's development expertise: no significant relationships
Intensity of conflicts: significant for the overall measure (P = 0.04) and quality of work (P = 0.04)
Extent of changes: no significant relationships
Lack of user support: no significant relationships
Insufficient resources: no significant relationships
Lack of role clarity: significant for the overall measure (P = 0.01) and five of the seven items (P = 0.00, 0.01, 0.04, 0.03, 0.04)
Application complexity: no significant relationships
Lack of user experience: significant for quality of work (P = 0.02)
Table 5
Overall system success: results of multiple regression analysis

Dependent variable: overall project effectiveness

Independent variable                     T-value  P-value
Technological acquisition                 1.47    0.14
Application size                          0.97    0.33
Lack of team's general expertise         -3.97    0.00*
Lack of team's expertise with the task    0.94    0.34
Lack of team's development expertise      0.31    0.75
Lack of user support                     -0.53    0.59
Intensity of conflicts                    2.06    0.04*
Extent of changes                         0.52    0.60
Resource insufficiency                   -0.71    0.48
Lack of clarity of role definitions      -2.75    0.01*
Application complexity                    0.14    0.88
Lack of user experience                  -1.66    0.10

Model: F-value = 4.77, Prob. > F = 0.00*, R-square = 0.43
* P < 0.05 level.
a more complete description and delineation of duties. Again, this is a risk best controlled prior to the initiation of the project (Roberts and Fusfeld, 1997).
To control for role definition risks, project managers should negotiate with each prospective team member. The member selection process must include a clear discussion of the specific task, expected outcomes, potential rewards, timing, responsibilities, reporting relationships, and importance of the project. Members should be assigned only when the team structure and operating concepts are clear. The project plan, task matrix, project charter, and policy are principal tools in this task (Thamhain and Wilemon, 1987). Conflicts and confusion among the team members are avoided when project managers clearly communicate the overall project objectives to all team members. Clear and frequent updates to top management and team members keep the alignment of roles straight.
These two risks are also intuitively related to the concerns for efficiency. Poor communication among group members does not allow for the coordination necessary to conduct the individual tasks required to complete the project in an orderly fashion. Likewise, if the roles are not clearly defined, then much time will be spent on duplication of effort and the search for appropriate activities on the part of the group members. Unless the paths are clearly defined and the ideas are flowing, progress will be toward each individual's goal rather than the project goal.
The most global measure, quality of work, is impacted by more of the risks than the other items. Here, lack of user experience and intensity of conflicts are also significantly related. User experience is certainly a necessary element in meeting the requirements for the system. Specifications derived and prototypes tested based on input from inexperienced users could clearly be faulty. Again, this is an item to be considered prior to the onset of the project. Intensity of conflicts provides an interesting deviation from the remaining results, since control of conflict is one of the ongoing responsibilities of the project manager. Negotiating clear specifications early on will avoid some conflict, but changing environments require that project managers be prepared to monitor conflict and intervene if the conflict will impact system quality.
It seems somewhat surprising that the team's expertise with the task and the team's development expertise were not found to be significant. However, the dependent variable examined in this study is project effectiveness only. Other dimensions of project success, such as system technical performance, are not included in this study. Perhaps the team's development expertise and expertise with the task are more related to other dimensions of project success. Once we have a better handle on which risks are applicable to various forms of success, we may begin to examine relationships between risk factors (will one risk trigger others?), effect relationships (are there root causes of risks?), and appropriate mitigation strategies (which strategies are more effective for reducing the risks and thus increasing the chance of system success?).
References
Alter, S., 1979. Implementation risk analysis. TIMS Studies in
Management Science 13, 103±119.
Alter, S., Ginzberg, M., 1978. Managing uncertainty in MIS imple-
mentation. Sloan Management Review 20, 23±31.
Anderson, H., Narshimhan, R., 1979. Assessing project implementa-
tion risk: a methodological approach. Management Science 25,
512±521.
Barki, H., Rivard, S., Talbot, J., 1993. Toward an assessment of
software development risk. Journal of Management Information
Systems 10, 203±223.
Beath, C.M., 1983. Strategies for managing MIS: a transaction cost
approach. In: Proceedings of the Fourth International Confer-
ence on Information Systems. Houston, ACM/SIGBDP,
pp. 415±427.
Blackburn, J.D., Hoedemaker, G., Wassenhove, L.V., 1996. Concur-
rent software engineering: prospects and pitfalls. IEEE Transac-
tions on Engineering Management 43, 179±188.
Boehm, V.W., 1989. Software Risk Management. IEEE Computer
Society Press, Los Alamitos, CA.
Cafasso, R., 1994. Few IS projects come in on time, on budget.
Computerworld December 12, 20±21.
Carmel, E., Becker, S., 1995. A process model for packaged software
development. IEEE Transactions on Engineering Management
42, 50±61.
Casher, J.D., 1984. How to control risk and e�ectively reduce the
chance of failure. Management Review 73, 50±54.
Christopher, D., Mukhopadhyay, T., Goldenson, D., Kellner, M.I., 1996. Software processes and project performance. Journal of Management Information Systems 12, 187–205.
Cusumano, M., 1991. Japan's Software Factories: A Challenge to US Management. Oxford University Press, Oxford, UK.
Davis, G.B., 1982. Strategies for information requirements determination. IBM Systems Journal 21, 4–30.
DeLone, W.H., McLean, E.R., 1992. Information systems success: the quest for the dependent variable. Information Systems Research 3, 60–95.
Dion, R., 1993. Process improvement and the corporate balance sheet. IEEE Software 10, 28–35.
Hauptman, O., Hirji, K.K., 1996. The influence of process concurrency on project outcomes in product development: an empirical study of cross-functional teams. IEEE Transactions on Engineering Management 43, 153–164.
Humphrey, W., 1989. Managing the Software Process. Addison-Wesley, Reading, MA.
Jiang, J., Klein, G., Balloun, J., 1996. Ranking of system implementation success factors. Project Management Journal 27, 50–55.
Jiang, J., Klein, G., Balloun, J., 1998. Perception of system development failures. Information and Software Technology 39, 933–937.
Jones, M.C., Harrison, A.W., 1996. IS project team performance: an empirical assessment. Information and Management 31, 57–65.
Kerlinger, F.N., 1986. Foundations of Behavioral Research. Holt, Rinehart and Winston, New York.
Larson, E.W., Gobeli, D.H., 1989. Significance of project management structure on development success. IEEE Transactions on Engineering Management 36, 119–125.
J. Jiang, G. Klein / The Journal of Systems and Software 52 (2000) 3–10
Larson, E.W., 1997. Partnering on construction projects: a study of the relationship between partnering activities and project success. IEEE Transactions on Engineering Management 44, 188–195.
Lyytinen, K., 1988. Expectation failure concept and system analysts' view of information system failures: results of an exploratory study. Information and Management 14, 45–56.
McFarlan, F.W., 1981. Portfolio approach to information systems. Harvard Business Review 59, 142–150.
Neumann, S., 1994. Strategic Information Systems: Competition Through Information Technologies. Macmillan, New York.
Nidumolu, S., 1995. The effect of coordination and uncertainty on software project performance: residual performance risk as an intervening variable. Information Systems Research 6, 191–219.
Nidumolu, S., 1996. A comparison of the structural contingency and risk-based perspectives on coordination in software-development projects. Journal of Management Information Systems 13, 77–113.
Pressman, R.S., 1992. Software Engineering: A Practitioner's Approach, third ed. McGraw-Hill, New York.
Roberts, E.B., Fusfeld, A.R., 1997. Information leadership roles in the innovation process. In: Katz, R. (Ed.), The Human Side of Managing Technological Innovation. Oxford University Press, New York, pp. 273–303.
Saarinen, T., 1996. An expanded instrument for evaluating information system success. Information and Management 31, 103–118.
Thamhain, H.J., Wilemon, D.L., 1987. Building high performing engineering project teams. IEEE Transactions on Engineering Management EM-34, 130–137.
Weill, P., Broadbent, M., 1998. Leveraging the New Infrastructure: How Market Leaders Capitalize on Information Technology. Harvard Business School Press, Boston, MA.
Zmud, R.W., 1980. Management of large software development efforts. MIS Quarterly 4, 45–55.
Dr. James J. Jiang is an Associate Professor of Computer Information Systems at Louisiana Tech University. His Ph.D. in Information Systems was awarded by the University of Cincinnati. His research interests include project management and knowledge management using the Consonance approach. He has published over 50 academic articles in these areas. He is a member of IEEE, ACM, and DSI.
Dr. Gary Klein is the Couger Professor of Information Systems at the University of Colorado in Colorado Springs. He obtained his Ph.D. in Management Science at Purdue University. Before that time, he served with Arthur Andersen & Company in Kansas City and was director of the Information Systems department for a regional financial institution. He was previously on the faculty at the University of Arizona, Southern Methodist University and Louisiana Tech University, and served as Dean of the School of Business at the University of Texas of the Permian Basin. His specialties include information system development and mathematical modeling, with over 50 academic publications in these areas.