

Preliminary studies on the development of a decision support system for evaluating engineering and scientific software

AVI RUSHINEK

Department of Accounting, University of Miami, PO Box 248031, Coral Gables, FL 33124, USA.

SARA F. RUSHINEK

Department of Management Science and Computer Information Systems, University of Miami, Coral Gables, FL 33124, USA.

The selection of an engineering and scientific software (ESS) system is a complicated process. The overall user satisfaction derived from a system depends on many variables. This study analyzes the influence of ESS predictor variables on overall satisfaction as determined by multiple regression. The study confirms the theories that suggest that ESS ease of operation, computer reliability, and technical support are the major determinants of overall computer user satisfaction. The factors identified were used for the design of an ESS Decision Support System.

This paper presents another option: software which has tabulated user ratings from online questionnaires and makes direct comparisons to industry standards. The original data, collected by an impartial company in an extensive survey, is used to analyze which variables contribute most to user satisfaction and to generate the smaller questionnaire for the interactive component. Information in this data base can be made available to prospective buyers but, even more importantly, would remain in place to monitor user satisfaction on an ongoing basis. This decision support system can help users isolate problem areas and suggest solutions while constantly updating the user satisfaction files through telecommunication networks.

INTRODUCTION

The information obtained from user ratings of engineering and scientific software (ESS) systems could be very useful to ESS buyers and sellers who would like to see some type of rating scale for these systems before deciding which type of system to buy or sell.

Traditionally, buyers or sellers who are interested in evaluating overall user satisfaction of a potential new ESS system have two options. One option is to study the technical specifications of the different ESS systems and their respective user satisfaction reports. The disadvantage of this option is that the buyer or seller may have neither the time nor the technical expertise to understand the specifications. Moreover, many user satisfaction studies of ESS systems are incomplete, vague, inaccurate, subjective, ambiguous, non-quantitative, and/or, most importantly, too narrow to be statistically generalizable.2,3

Another option that is available to buyers and sellers of ESS systems is to hire consultants who can understand the technical specifications, discount inaccuracies and subjective judgments of trade publications, and, most importantly, use generic information to make suggestions custom-tailored to a particular installation. The main disadvantage of this option is that such experts are hard to come by, disruptive to normal operation, and rather expensive.4

Accepted May 1985. Discussion closes December 1985.

THEORETICAL FOUNDATIONS AND LITERATURE REVIEW

Theories of Consumer Satisfaction (CS) suggest that, prior to purchase, consumers form expectations concerning the future performance of products such as ESS systems. After using the products, the consumers compare the actual performance with the expected performance. This comparison leads to a confirmation or a disconfirmation of the expectations, which in turn affects the level of satisfaction: the greater the confirmation, the higher the satisfaction of the consumer.5-9 Accordingly, the authors hypothesize that a positive relationship exists between the degree of meeting user expectations and the overall user satisfaction.

Traditionally, there has been a clear distinction among micro, mini, and mainframe computers. The power and capabilities of ESS systems have improved over the years; memory costs have gone down and performance distinctions between different systems have blurred. This challenges the traditional size distinction and its implications for evaluations. It is therefore important to investigate whether the traditional size categories (micro, mini, or mainframe) actually do have an effect on user satisfaction. Thus, one of the objectives of this study is to determine whether the size of the ESS computer has any effect on user satisfaction.

Choosing the right software from the bewildering array of systems, manufacturers, and configurations of components can be a frustrating and expensive experience for buyers and vendors alike.11 Frustration could be reduced, or at least controlled, if the parameters of ESS user satisfaction were better understood.

0141-1195/85/040173-6 $2.00 © 1985 CML Publications. Adv. Eng. Software, 1985, Vol. 7, No. 4

Buyers of ESS systems should look at advantages and/or disadvantages in cost, ease of operation, system reliability, and vendor reliability, such as established vendor firms vs. newer and smaller firms.12 Accordingly, cost should be one of the most important determinants of user satisfaction. This may also suggest the inclusion of criterion variables that indicate the popularity of the vendor. Other such criterion variables include the number of systems, their average useful life, and the number of users of these systems.

In addition, due to the rapidly changing technology, management must be willing to commit time to the conversion of an outdated system. Thus the ease of conversion should be included in the study as an important determinant of user satisfaction. One can hypothesize that the easier the conversion process, the more satisfied the ESS users should be. Comparison of ESS systems should be done in the areas of support, service, ease of operation, compatibility and reliability of the computer, peripherals, compilers, and assemblers, as well as the cost of purchase and operation.13 Therefore, the authors expect these variables to be positively correlated with satisfaction.

The importance of having a written contract with the vendor has been discussed in the literature.14 A thorough contract should cover reliability, performance, operating system compatibility, effectiveness, training, costs, and troubleshooting. Some studies cite maintenance, service, education, and documentation as the top concerns of ESS system users. Applications availability and reliability have been rated next, with price the most important criterion after that.15 Therefore, such criterion variables have been incorporated into this model and are expected to correlate positively with overall satisfaction.

Some research reports that user support in terms of education and documentation seems to affect user satisfaction. Scannell16 has cited that users find software and support to be major problems. However, complaints that the computer industry does not provide adequate training, documentation, and manuals for users have been rebuffed by industry representatives.17

Questions have been raised about the effectiveness and responsiveness of traditional system maintenance services.18,19 The issue of centralization vs. decentralization of maintenance contracts has been addressed.20 The authors of the present study explore a different facet: the impact of the effectiveness and responsiveness of maintenance services on overall user satisfaction.

Another issue that may affect ESS user satisfaction is the method of acquisition. According to Kelly,21 the impact of buying or leasing may be substantial. Therefore, these criterion variables are evaluated in the present model.

ENGINEERING AND SCIENTIFIC SOFTWARE PRACTICE AND THEORY

Both practitioners and theorists have been struggling with various aspects of ESS systems. Some practitioners dealing with computers have raised perplexing questions. 22

Choosing an appropriate ESS system and recognizing its limitations have been among the most difficult and confusing tasks for the engineering profession.23,24 A better understanding of the determinants of ESS user satisfaction could not only partially answer the above questions, but could also facilitate the task of choosing a system and recognizing its limitations.

Theorists have wrestled with the problems from a more scientific point of view. They have examined the demise of help for utility planning engineers,25 evaluated computer graphics in central office engineering,26 and investigated critical success factors in engineering firms.27 Much like the present study, surveys of ESS techniques have been conducted, although they fell short of rigorous, empirical, and comprehensive coverage of ESS user satisfaction.

Meanwhile, strides have been made in developing decision support systems (DSS) as they relate to ESS.28 However, as in previous studies, the issue of user satisfaction has not been fully addressed. Therefore, the principal objective of this study is to supplement previous studies and to integrate the issue of user satisfaction into the DSS model, while building upon prior theoretical work.

The above literature review sheds some light on the importance of the different criterion variables and their consideration in the DSS for ESS systems. System rating information could be a very useful tool to managers who are planning the acquisition of ESS systems, as well as to vendors, who must decide which systems to develop, market, and/or support.

Measurement of system ratings is quite complex and requires a selection of various criterion (independent) variables. It also requires an analysis of these variables to determine how they are related to one another. This paper describes the results of a system rating study in which the users were asked to respond to many questions. These questions (independent variables), based on the literature, are the primary determinants of overall user satisfaction (dependent variable).

The overall user satisfaction is related to these ESS variables with the use of multiple regression analysis. This analysis is the basis for the design of a decision support system for forecasting user satisfaction in a specific computer installation. The DSS can compare the current user satisfaction to industry standards, past levels of satisfaction, and desirable future levels of user satisfaction.

SURVEY METHODOLOGY AND DATA COLLECTION

This survey was based on results received from questionnaires mailed to a very carefully controlled nth sampling from a randomly drawn subset of computer user lists. A total of 15 218 questionnaires were sent to computer users. The specific subsets were identified and qualified by a panel of experts. In an effort to improve the response rate, and thereby increase the statistical validity, the users were contacted twice; a first request was followed weeks later by a second request. The response rate was 32%, representing 4597 users, who responded with 4870 questionnaires (some users evaluated more than one computer model).

Judges invalidated 379 responses, including 179 users who rated two different computers at the same time; another 43 users rated more than two different systems simultaneously. Datapro1 batched the remaining 4448 valid returns by vendor, model, users, and computer types [mainframes or plug-compatible mainframe computers (maxis), minicomputers and small business computers (minis), and desk-top personal and microcomputers (micros)] as follows:

           Maxis   Minis   Micros   Total
Users       1919    2192     337     4448
Computers     67      93      19      179
Vendors       10      28      17       55

Each questionnaire allowed the user to rate one system. The recipient was encouraged to reproduce the form if he/ she wished to rate more than one system. For each system the responses were averaged and recorded. Labels were used as initial validation vehicles and for identification and elimination of duplicate returns. Recipients were asked to summarize their experiences with the systems currently being used and to answer questions about them.

METHODS AND PROCEDURES

A total of 179 computer systems were represented in the survey. The present authors coded and stored the responses to 20 questions (variables) on the computer (see variable legend). The data were tested for validity and consistency; for example, percentage values were checked to fall between 0 and 100. Non-response bias was evaluated with an F-test and found to be insignificant.

The procedure of data collection and data-base updating is done through an interactive on-line questionnaire (IOQ), which is a model for the DSS. This method of data collection avoids the pitfalls of traditional manual questionnaires. Some of these pitfalls include: (1) incomplete, illegible responses; (2) non-response and sampling bias; (3) low response rate; (4) long elapsed time from distribution to the analysis phase; (5) time-consuming, error-prone data transcription and key-punch operation; (6) disruption, resentment, and anxiety produced in the respondent; and, most importantly, (7) ambiguity in questionnaire items. The IOQ controls the above problems by validation procedures and, most importantly, clarifies ambiguities through help files. A respondent can enter a '?' instead of an answer to obtain clarification concerning an ambiguous item.
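The validation-and-help behaviour described above can be sketched as a small interactive loop. This is a minimal illustration, not the authors' actual IOQ: the question texts, help entries, and 0-100 rating ranges are assumptions.

```python
# Sketch of an interactive on-line questionnaire (IOQ) loop with
# range validation and '?' help requests. Questions are illustrative.
QUESTIONS = [
    # (key, prompt, help text, valid range)
    ("operation", "Rate ease of operation (0-100): ",
     "How easy the system is to operate day to day.", (0, 100)),
    ("computer", "Rate computer reliability (0-100): ",
     "Frequency and severity of hardware failures.", (0, 100)),
]

def ask(prompt, help_text, lo, hi, read=input):
    """Keep prompting until a value in [lo, hi] is given.
    A '?' answer prints the help-file entry instead of an answer."""
    while True:
        raw = read(prompt).strip()
        if raw == "?":                      # clarification request
            print(help_text)
            continue
        try:
            value = float(raw)
        except ValueError:
            print("Please enter a number.")
            continue
        if lo <= value <= hi:               # range validation
            return value
        print(f"Value must be between {lo} and {hi}.")

def run_ioq(read=input):
    """Collect one validated answer per question."""
    return {key: ask(prompt, help_text, lo, hi, read)
            for key, prompt, help_text, (lo, hi) in QUESTIONS}
```

Because the reader function is injectable, the loop can be exercised without a terminal, which also makes the validation rules easy to test.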

RESULTS AND DISCUSSION

The predicted variable (overall satisfaction) is regressed on the criterion (independent) variables. This is done by a forward stepwise inclusion procedure, in a manner which provides considerable control over the inclusion of independent variables in the regression equation.29,30
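The forward stepwise inclusion procedure can be sketched as a greedy loop that, at each step, adds the candidate variable producing the largest R-square change. This is a generic sketch, not the authors' implementation; the variable names and the inclusion threshold are assumptions.

```python
# Forward stepwise inclusion by R-squared change, via ordinary
# least squares. A generic sketch of the procedure in the text.
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit of y on X (with intercept)."""
    Xi = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def forward_stepwise(X, y, names, min_rsq_change=1e-4):
    """Greedily include columns of X by largest R-squared gain.
    Returns (name, cumulative R-squared) pairs in inclusion order."""
    included, best_rsq, order = [], 0.0, []
    remaining = list(range(X.shape[1]))
    while remaining:
        gains = []
        for j in remaining:
            rsq = r_squared(X[:, included + [j]], y)
            gains.append((rsq - best_rsq, j))
        gain, j = max(gains)
        if gain < min_rsq_change:   # stop when the gain is negligible
            break
        included.append(j)
        remaining.remove(j)
        best_rsq += gain
        order.append((names[j], round(best_rsq, 3)))
    return order
```

On synthetic data where one predictor dominates, the dominant variable enters first, mirroring the ordering by R-square change reported in Table 4.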

Table 1 presents the statistics used for the overall test of goodness of fit of the regression equation. This table shows the multiple R, R squared, the standard error, and an analysis of variance (ANOVA) for the regression model. This step was selected because each additional variable added to the model increased the multiple R while keeping the overall F value statistically significant at the 0.01 level.

According to these tests we can conclude that the sample R square of 0.818 indicates that 82% of the variation in overall satisfaction is explained by these independent variables. The standard error of the estimate at the 19th step is 4.22. This means that, on average, the predicted

Table 1. Multiple regression. Overall significance test for goodness of fit of the regression equation and analysis of variance (ANOVA)

Multiple R         0.904      Analysis of   DF    Sum of      Mean
R square           0.818      variance            squares     square
Adjusted R square  0.796      Regression     19   12659.042   666.265
Standard error     4.216      Residual      159    2825.575    17.771

Critical F 1.88     Calculated F 37.492*

*Overall significance at less than the 0.01 level.
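The quantities in Table 1 are internally consistent and can be re-derived from the two sums of squares alone, as a quick check:

```python
# Re-derive the Table 1 statistics from the sums of squares and
# degrees of freedom reported in the ANOVA.
import math

ss_regression, df_regression = 12659.042, 19
ss_residual, df_residual = 2825.575, 159

ms_regression = ss_regression / df_regression   # mean square, regression
ms_residual = ss_residual / df_residual         # mean square, residual
f_value = ms_regression / ms_residual           # calculated F
r_square = ss_regression / (ss_regression + ss_residual)
std_error = math.sqrt(ms_residual)              # standard error of estimate
```

Each derived value matches the corresponding entry printed in Table 1 (666.265, 17.771, 37.492, 0.818, and 4.216 respectively).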

Table 2. Variables in the equation. Significance test for specific coefficients in the regression

Step No.*   Variable name    B         BETA    RANK†   Std. error of B   F
 1          Operation        0.298+0   0.260     1     0.067            19.865
 2          Trouble shoot    0.118+0   0.137     3     0.051             5.281
 3          Computer         0.214+0   0.194     2     0.056            14.494
 4          Programming      0.125+0   0.128     4     0.059             4.419
 5          Peripherals      0.107+0   0.101     5     0.054             3.970
 6          Compilers        0.633-1   0.084     7     0.034             3.472
 7          Education        0.559-1   0.060    10     0.057             0.972
 8          Expectations     0.441-1   0.075     9     0.023             3.652
 9          Op. system       0.873-1   0.091     6     0.054             2.611
10          Mainframe        0.391-1   0.048    11     0.030             1.557
11          Effectiveness    0.740-1   0.077     8     0.052             2.055
12          Microcomputer    0.509-1   0.042    12     0.050             1.050
13          Minicomputer     0.222-1   0.027    17     0.032             0.479
14          Life in Mos.     0.517-1   0.034    15     0.054             0.914
15          Rental           0.966-2   0.025    18     0.015             0.402
16          Applications     0.201-1   0.029    16     0.027             0.529
17          Conversion       0.277-1   0.035    13     0.040             0.481
18          Documentation    0.311-1   0.035    14     0.051             0.374
19          Lease            0.385-2   0.006    19     0.025             0.024

*Ranked in descending order of contribution to the explained variance (R-square change, Table 4). B is printed in mantissa/exponent form (e.g. 0.633-1 = 0.0633).
†Ranked according to BETA, which indicates the change in satisfaction due to a one standard deviation change in the respective variable.

overall satisfaction will deviate from the actual scores by 4.22 units on the overall satisfaction scale.

The relative importance of each predictor (independent) variable for the predicted (dependent) variable is described in Table 2. This relative importance is described by BETA, the change in satisfaction due to a one standard deviation change in the predictor criterion variable. The variables and their coefficients are the basis for the DSS model, used in an interactive on-line questionnaire (Appendix A).
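The relation between the raw coefficient B and the standardized BETA can be made explicit: BETA rescales B by the standard deviations of the predictor and the dependent variable. The numbers below are illustrative, not taken from the survey data.

```python
# Standardized coefficient (BETA) from a raw coefficient (B):
# BETA = B * sd_x / sd_y, i.e. the change in y, in sd_y units,
# per one-standard-deviation change in the predictor.
def standardized_beta(b, sd_x, sd_y):
    """Standardize a raw regression slope b."""
    return b * sd_x / sd_y
```

This is why BETA, rather than B, is used to rank the predictors: it puts variables measured on different scales onto a common footing.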

The ranking of the independent variables affecting the overall satisfaction of ESS users reveals some interesting results. On the one hand, it appears that one standard deviation changes in ease of operation and technical support trouble shooting (ranked 1 and 2) have the largest effects on the dependent variable. On the other hand, Lease From Third Party and Documentation have the smallest effects on overall satisfaction with the ESS system.

The majority of variables have expanding, or positive, effects on the dependent variable. Interestingly, the negative coefficient of mainframes is substantially smaller than that of minis. This may be explained by the greater user control over the microcomputer, and thus the lesser aversion to micros as compared to mainframes.31



Multiple regression is used as a stepwise procedure for adding independent variables into the equation. The variables are included according to their R-square change. As can be seen from Table 2, the inclusion of all the variables except for user responses, systems represented, acquisition, and maintenance service has an influence on R square. Therefore, all these variables should be incorporated in the model.

Table 3 lists the number of user responses, number of systems represented, purchase acquisition method, and responsiveness of maintenance service as the only criterion variables that are excluded from the overall user satisfaction model. Their contribution to the overall multiple R was negligible.

Table 4 shows the significance test for specific coefficients of the model. The coefficients in Table 4 show the R square (RSQ) change due to each variable in the model; the RSQ change determines the inclusion sequence.

Variable legend

1. Average life of computer systems in months (Life in Mos)
2. Rented from the manufacturer of the computer (Rental)
3. Leased from a third party (Lease)
4. Micro computer based ESS systems (Microcomputer)
5. Mini computer based ESS systems (Minicomputer)
6. Mainframe computer based ESS systems (Mainframe)
7. Ease of operation (Operation)
8. Reliability of the computer (Computer)
9. Reliability of peripherals (Peripherals)
10. Maintenance service effectiveness (Effectiveness)
11. Technical support: trouble shooting (Trouble-shoot)
12. Technical support: education (Education)
13. Technical support: documentation (Documentation)
14. Manufacturer's software operating system (Op. system)
15. Compilers and assemblers (Compilers)
16. Applications programs (Applications)
17. Ease of programming (Programming)
18. Ease of conversion (Conversion)
19. Systems meeting user expectations (Expectations)
20. Overall system satisfaction (Satisfaction)

DECISION SUPPORT SYSTEM FOR ENGINEERING AND SCIENTIFIC SOFTWARE DIAGNOSTICS

Traditionally, experts have been using survey questionnaires to evaluate ESS. Such a questionnaire would usually be administered manually through an interview. These manual software evaluation methods have numerous disadvantages.

In contrast, an on-line interactive data collection offers many advantages. Most importantly, selective clarifications are provided by help files and immediate feedback becomes

Table 3. Variables not in the equation

Variable                                 Beta in   Partial   Tolerance   F
No. user responses                       -0.001    -0.002    0.855       0.001
No. systems represented                   0.003     0.007    0.872       0.007
Purchase acquisition method               0.012     0.008    0.078       0.010
Responsiveness of maintenance service    -0.002    -0.003    0.268       0.001

possible. In fact, a computerized expert system analyzes the data immediately after it has been entered, providing immediate feedback and diagnostics to the user.

The DSS interactively interrogates the user about the system (Appendix A). User responses are underlined and recorded anonymously in a data-base. Subsequently, the DSS generates the diagnostics audit trail (Appendix B). This report follows the interactive questionnaire, providing immediate feedback. Later, it may also be used by an internal or external auditor, manager, or user for system development. It compares the user's installation to industry standards, based on the frequently updated data-base information.

This ESS diagnostics audit trail sorts the report items in ascending order of the current deviates, which reflect the relative weaknesses (-) or strengths (+) of the installation relative to the industry. It generates a current user satisfaction score, e.g. 20.81, and computes the gain or loss in overall satisfaction.
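The sorting of deviates described above can be sketched as follows; the installation and industry figures are invented for illustration, not drawn from the Datapro data.

```python
# Sketch of the diagnostics audit trail: a deviate is the difference
# between this installation's rating and the industry average, sorted
# ascending so the largest weaknesses (most negative) come first.
def audit_trail(installation, industry):
    """Return (variable, deviate) pairs in ascending order of deviate."""
    deviates = {var: installation[var] - industry[var]
                for var in installation}
    return sorted(deviates.items(), key=lambda item: item[1])

# Illustrative ratings for one installation vs. industry averages
site = {"Operation": 62.0, "Trouble shoot": 80.0, "Computer": 71.0}
norm = {"Operation": 70.0, "Trouble shoot": 75.0, "Computer": 72.0}
report = audit_trail(site, norm)   # weaknesses (-) first, strengths (+) last
```

In this sketch the report would list Operation first (the largest weakness), so remedial attention is directed where it matters most.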

Table 4. Summary table

Variable        Multiple R   R square   RSQ change*   Simple R    B          BETA
Operation       0.721        0.521      0.52112        0.721       0.298+0    0.260
Trouble-shoot   0.828        0.686      0.16510        0.628       0.118+0    0.137
Computer        0.868        0.753      0.06744        0.677       0.214+0    0.194
Programming     0.882        0.777      0.02341        0.690       0.125+0    0.128
Peripherals     0.886        0.785      0.00851        0.569       0.107+0    0.101
Compilers       0.890        0.792      0.00695        0.495       0.633-1    0.084
Education       0.896        0.803      0.00552        0.529       0.558-1    0.060
Expectations    0.893        0.797      0.00542        0.361       0.440-1    0.075
Op. system      0.898        0.807      0.00358        0.666       0.873-1    0.091
Mainframe       0.899        0.809      0.00220        0.047       0.371-1   -0.048
Effectiveness   0.900        0.811      0.00170        0.561       0.740-1   -0.077
Microcomputer   0.901        0.813      0.00164       -0.050       0.509-1    0.042
Minicomputer    0.902        0.814      0.00162        0.046       0.222-1    0.027
Life in Mos.    0.903        0.815      0.00106       -0.164      -0.517-1   -0.034
Over priced     0.904        0.818      0.00065       -0.312      -0.318-1   -0.034
Rental          0.904        0.816      0.00062       -0.100      -0.966-2   -0.025
Applications    0.904        0.817      0.00060        0.423       0.200-1    0.027
Conversion      0.903        0.816      0.00056        0.597       0.277-1    0.035
Documentation   0.904        0.817      0.00043        0.530       0.311-1    0.035
No. systems     0.905        0.819      0.00013        0.102       0.149-1    0.079
No. users       0.905        0.819      0.00007        0.099      -0.146-1   -0.069
Responsive      0.905        0.819      0.00006        0.494      -0.152-1   -0.017
Lease           0.904        0.818      0.00003        0.066      -0.385-2   -0.006
(Constant)                                                        -0.281+2

*Primary key for forward step-wise inclusion of criterion variables. B is printed in mantissa/exponent form (e.g. 0.633-1 = 0.0633).



The DSS decomposes the change in overall satisfaction and identifies the sources of the change. Based upon that, it also generates prioritized recommendations for further improvements (Appendix C). The responses of the user, along with the diagnostics audit trail, are stored in a transaction file and eventually merged with the old data-base master file to form the updated master file.
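With a linear regression model, decomposing a change in predicted satisfaction follows directly from the fitted equation: the total change is the sum of each coefficient times the change in its variable. The coefficients and ratings below are illustrative, not the fitted values from Table 2.

```python
# Decompose a change in predicted overall satisfaction into
# per-variable contributions b_i * (new_i - old_i).
def decompose_change(coeffs, old_ratings, new_ratings):
    """Return each variable's contribution to the change, plus the total."""
    parts = {var: coeffs[var] * (new_ratings[var] - old_ratings[var])
             for var in coeffs}
    return parts, sum(parts.values())

# Illustrative coefficients and two successive sets of ratings
coeffs = {"Operation": 0.30, "Computer": 0.21}
old = {"Operation": 60.0, "Computer": 70.0}
new = {"Operation": 70.0, "Computer": 68.0}
parts, total = decompose_change(coeffs, old, new)
# prioritized recommendations would target the most negative parts
```

Here the improvement in ease of operation outweighs the slight drop in computer reliability, so the net predicted satisfaction rises; a negative part would become a prioritized recommendation.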

SUMMARY

In summary, multiple regression has been used to study the dependence of overall satisfaction with an ESS system on many ESS variables. Overall significance tests of the goodness of fit of the model have been conducted. The multiple correlation coefficient was 0.904; thus the null hypothesis that the correlation coefficient was zero was rejected. A sample run of the DSS was illustrated, together with an audit trail report and prioritized recommendations.

In conclusion, many independent variables had regression coefficients which were significantly different from zero. The variables were rank ordered according to their BETA values. Ease of operation was ranked the single most important factor in determining satisfaction. Other variables which contributed substantially were trouble shooting, computer reliability, ease of programming, and reliability of peripherals. ESS applications invariably have a negative effect on satisfaction; however, this effect diminishes as the applications are downloaded from mainframes into minis and micros. This may indicate that more attention should be devoted to ESS user satisfaction, especially for mainframe computers.

It appears that satisfaction depends on ease of operation and trouble shooting, while whether a computer is leased from a third party or whether the technical support documentation is adequate has a minimal effect on overall satisfaction. The acquisition method variable did not contribute to any major extent to overall satisfaction with the computer system; therefore, it was completely excluded from the model.

The implications of the present study are many. The overall satisfaction of system users can be measured by answering certain questions and these results can be very useful to system users as well as buyers and vendors. ESS system buyers can compare different variables and thus can calculate the overall satisfaction they would derive by buying the system. The vendors and designers can build ESS systems based on the criteria which are important to users. Thus, they will maximize user satisfaction and eventually increase their sales.

ESS vendors of the computer systems can determine the variables which would increase the overall satisfaction of their products. Thus, they would be more likely to incorporate some of these features in their systems. This could lead to better ESS systems as well as increase research and development. Vendors could also use these data as a marketing tool for their products. If their ESS systems have the features which were highly ranked, they could advertise them and attract additional customers. These kinds of studies could promote vendors who are concerned with user satisfaction, and provide them an advantage over the competition.

Most importantly, this DSS for ESS provides the buyer or user with an effective tool for system selection and upgrading. Buyers can evaluate potential ESS systems on the basis of their user satisfaction scores, and eventually choose the system that will yield the highest satisfaction compared with other systems. Current users can evaluate the satisfaction at their own installation and compare it with market standards, identifying weaknesses and strengths. Moreover, they can apply remedial action to improve their satisfaction and gauge their progress by running the DSS on a regular basis.

IMPLICATIONS AND CONCLUSION

This report is a preliminary study based upon random samples reflecting cross-sections of computer users, undertaken to compare the ideals and qualities that users expect from their computer hardware and software configurations with industry standards.

The results imply a preference by users for ease of operation, technical support (trouble-shooting), and computer reliability. These seem logical as the most desired attributes in a configuration or microsystem. If one of the aforementioned qualities were lacking, diseconomies would arise, such as downtime and a lack of useful information for decision making by management.

This information is of value not only to users but also to the marketing executives of computer companies, who can learn what prospective users desire in a computer. The complement is also true: the consumer or user can detect what qualities should be present in a product, as well as the reliability of the vendor.


Adv. Eng. Software, 1985, Vol. 7, No. 4, 177



APPENDIX A

WELCOME TO THE INTERACTIVE ONLINE DECISION SUPPORT SYSTEM FOR ENGINEERING AND SCIENTIFIC SOFTWARE (ESS) DIAGNOSTIC PROGRAM AS OF 9/12/85

The objective of this questionnaire is to identify and quantify the weaknesses and the strengths of your ESS system. It will print out a prioritized list of this installation's deviations from industry standards and diagnostic AUDIT-TRAIL recommendations.

Your computer vendor is: IBM Model: PC XT Others (Y/N): N

ENTER YOUR RESPONSES CONCERNING EXCLUSIVELY ESS ON THE IBM PC XT: (Enter a "?" when additional information is needed.)

1. Number of computer users sharing this ESS system? 12
2. Number of computer systems at your site?
3. Average life of these computer systems in months? ~-5

PERCENTAGE OF ESS SYSTEMS (EXCLUDING NON-ESS) THAT ARE:
1. Rented from the manufacturer of the computer? 15%
2. Leased from a third party?
3. Micro computer based ESS systems you are using? 10%
4. Mini computer based (if > 0 will rerun for minis)? 00%
5. Mainframe computer based (if > 0 will rerun for maxis)? 00%

CONCERNING MICROCOMPUTERS (EXCLUDING MINIS AND MAINFRAMES)
PLEASE RATE 1% FOR VERY BAD ... THRU ... 100% FOR VERY GOOD
6. Ease of operation? 62%
7. Reliability of the computer? 72%
8. Reliability of peripherals? 60%
9. Maintenance service effectiveness? 63%
10. Technical support trouble-shooting? 50%
11. Technical support education? 49%
12. Technical support documentation? 53%
13. Manufacturer's software operating system? 55%
14. Compilers and assemblers? 52%
15. Applications programs? 48%
16. Ease of programming? 65%
17. Ease of conversion? 60%
18. Systems meeting user expectations? 71%

Please enter additional comments, and press two carriage returns:

I need help, the education/training is inadequate. Learning on my own is very frustrating and I wish I could attend some workshops!

Would you like an ESS Printout, Display, or Both (P/S/B)? P
O.K., is your printer online and ready to print (Y/N)?

**** THE ESS DIAGNOSTICS AUDIT TRAIL IS NOW BEING PRINTED ****

NOTE: User entries are displayed as underlined.
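The interactive session above could be driven by a small prompt loop. A minimal sketch, assuming hypothetical question keys and an injected answer source in place of real terminal input:

```python
# Hypothetical sketch of the Appendix A questionnaire driver. Question
# texts follow the printout; the answer source is injected so the loop
# can run without a terminal (real use would pass input()).
QUESTIONS = [
    ("operation", "Ease of operation?"),
    ("computer", "Reliability of the computer?"),
    ("peripherals", "Reliability of peripherals?"),
]

def run_questionnaire(ask):
    """Collect 1-100 percentage ratings, re-asking on invalid input."""
    responses = {}
    for key, text in QUESTIONS:
        while True:
            raw = ask(text).rstrip("%")
            if raw.isdigit() and 1 <= int(raw) <= 100:
                responses[key] = int(raw)
                break  # valid rating; move to the next question
    return responses

# Usage: feed canned answers from the sample session ("bad" is rejected
# and the question is asked again).
canned = iter(["62%", "72%", "bad", "60%"])
print(run_questionnaire(lambda text: next(canned)))
```

Injecting the `ask` callable keeps the loop testable; on a live terminal one would simply call `run_questionnaire(input)`.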

APPENDIX B

DECISION SUPPORT SYSTEM FOR ESS DIAGNOSTIC AUDIT TRAIL FOR IBM PC XT
6/30/86

COLUMN: A = variable name; B = BETA value; C = industry-standard average
response; D = B*C = industry score; E = user-installation response;
F = B*E = ESS score; G = F-D = current deviate; H = past deviate.

VARIABLE NAME (A)     B       C     D=B*C      E     F=B*E    G=F-D       H
Reliability         .298     85    25.330     62    18.476   -6.854   -18.21
Computer            .214     86    18.404     72    15.408   -2.996   -10.22
Op. System          .087     81     7.047     55     4.785   -2.262   -11.64
Trouble Shoot       .118     67     7.906     50     5.900   -2.006    -3.13
Programming         .125     79     9.875     65     8.125   -1.750    -4.12
Peripherals         .107     76     8.132     60     6.420   -1.712    -6.74
Compiler            .063     78     4.914     52     3.276   -1.638    -6.55
Education           .056     64     3.584     49     2.744    -.840    -3.19
Effectiveness       .074     74     5.476     63     4.662    -.814    -2.13
Expectations        .044     85     3.740     71     3.124    -.616    -1.95
Applications        .020     69     1.380     48      .960    -.420    -1.19
Conversion          .028     73     2.044     60     1.680    -.364     -.98
Documentation       .031     63     1.953     53     1.643    -.310     -.90
Life In Mos.        .052      4      .208      2      .104    -.104     -.59
Micro Computer      .051      2      .102      1      .051    -.051     -.62
Mini Computer       .022      6      .132      5      .110    -.022     -.39
Lease               .004     13      .052      9      .036    -.016     -.79
Mainframe          -.037      5     -.185      4     -.148     .037      .02
Rental             -.010     22     -.220     15     -.150     .070      .03
Constant                          -21.800          -21.800
TOTAL                              78.074           55.406  -22.668   -56.38

Industry Standard from the Data Base                78.074
User Installation Overall Satisfaction Score        55.406
Less User Past Satisfaction Score from 12/30/83     20.981
Satisfaction Score Gain Since 12/30/83              34.425
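The arithmetic of the audit trail can be restated compactly: industry score D = B*C, installation score F = B*E, current deviate G = F-D, and each score total adds the regression constant. A sketch using three of the nineteen variables from the table:

```python
# Audit-trail arithmetic from Appendix B: industry score D = B*C,
# installation score F = B*E, current deviate G = F - D. Three of the
# nineteen variables are shown; the constant is the table's intercept.
ROWS = [
    # (name, BETA value B, industry average C, user response E)
    ("Reliability",   0.298, 85, 62),
    ("Computer",      0.214, 86, 72),
    ("Trouble Shoot", 0.118, 67, 50),
]
CONSTANT = -21.800

def audit_trail(rows, constant):
    """Return per-variable (name, D, F, G) plus both score totals."""
    out = []
    for name, b, c, e in rows:
        d, f = b * c, b * e
        out.append((name, round(d, 3), round(f, 3), round(f - d, 3)))
    industry = round(sum(r[1] for r in out) + constant, 3)
    user = round(sum(r[2] for r in out) + constant, 3)
    return out, industry, user

rows, industry, user = audit_trail(ROWS, CONSTANT)
for name, d, f, g in rows:
    print(f"{name:14s} D={d:7.3f} F={f:7.3f} G={g:7.3f}")
```

Running this over all nineteen variables with the full constant reproduces the table's totals of 78.074 (industry standard) and 55.406 (user installation).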

APPENDIX C

ESS DIAGNOSTICS AUDIT TRAIL RECOMMENDATIONS AS OF 9/12/85

RANK PRIORITY DESCRIPTION AND SUGGESTED REMEDIAL ACTION (IBM PC XT)

1. Operation should be facilitated through the use of "hands-on" workshops and the use of computer-assisted instruction (CAI).

2. Computer reliability should be raised by establishing a maintenance contract and biweekly overnight execution of the hardware diagnostic program to identify and treat problem areas.

3. Operating system should be updated and patched more frequently.

4. Trouble-shooting should be improved through the development of on-line help files and technical consulting staff.

5. Programming should be structured, documented and optimized.

6. Peripherals should be examined and maintained more frequently.

7. Compilers and assemblers should be used more frequently to speed up execution and reduce unauthorized software modification.

8. In-house education and training should be developed and used as a regular workload (at least 2 hours weekly).

9. Effectiveness of maintenance should be monitored carefully.

10. Expectations should be brought down to a more realistic level, by quality circle discussions and staff meetings.

11. Application software should be previewed prior to purchase.

12. Conversion efforts should be well planned.

13. Documentation should be placed on-line and used interactively.

14. Average system life in months should be estimated.
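The priority order above mirrors the current-deviate column of Appendix B: the most negative deviation from the industry standard receives the highest remedial priority. A minimal sketch, with hypothetical action labels attached to the table's first four deviates:

```python
# Rank remedial actions by current deviate (Appendix B, column G):
# the most negative deviation from the industry standard comes first.
# Action labels are illustrative, not the system's actual wording.
DEVIATES = {
    "Operation workshops / CAI": -6.854,
    "Computer maintenance contract": -2.996,
    "Operating system updates": -2.262,
    "On-line help files": -2.006,
}

def ranked_actions(deviates):
    """Sort ascending by deviate, so the worst shortfall is rank 1."""
    return sorted(deviates, key=deviates.get)

for rank, action in enumerate(ranked_actions(DEVIATES), start=1):
    print(f"{rank}. {action}")
```

Sorting on the deviate rather than the raw response keeps the priorities weighted by each variable's BETA, which is why operation outranks variables with larger raw gaps but smaller regression weights.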

CLOSING REMARK BY ESAI:

Keep up the good work, folks!!! Since 12/30/83, there have been major improvements in virtually every area, especially in the area of reliability.
