
Service Quality: A Measure of Information Systems Effectiveness
Author(s): Leyland F. Pitt, Richard T. Watson, C. Bruce Kavan
Source: MIS Quarterly, Vol. 19, No. 2 (Jun., 1995), pp. 173-187
Published by: Management Information Systems Research Center, University of Minnesota
Stable URL: http://www.jstor.org/stable/249687


Service Quality: A Measure of Information Systems Effectiveness

By: Leyland F. Pitt Henley Management College Greenlands Henley on Thames, RG9 3AU Oxfordshire United Kingdom [email protected]

Richard T. Watson Department of Management University of Georgia Athens, GA 30602-6256 U.S.A. [email protected]

C. Bruce Kavan Department of Management, Marketing, and Logistics University of North Florida 4567 St. John's Bluff Road Jacksonville, FL 32216 U.S.A. [email protected]

Abstract

The IS function now includes a significant service component. However, commonly used measures of IS effectiveness focus on the products, rather than the services, of the IS function. Thus, there is the danger that IS researchers will mismeasure IS effectiveness if they do not include in their assessment package a measure of IS service quality. SERVQUAL, an instrument developed by the marketing area, is offered as a possible measure of IS service quality. SERVQUAL measures service dimensions of tangibles, reliability, responsiveness, assurance, and empathy. The suitability of SERVQUAL was assessed in three different types of organizations in three countries. After examination of content validity, reliability, convergent validity, nomological validity, and discriminant validity, the study concludes that SERVQUAL is an appropriate instrument for researchers seeking a measure of IS service quality.

Keywords: IS management, service quality, measurement

ISRL Categories: A104, E10206.03, GA03, GB02, GB07

Introduction

The role of the IS department within the organization has broadened considerably over the last decade. Once primarily a developer and operator of information systems, the IS department now has a much broader role. The introduction of personal computers has resulted in more users of information technology interacting with the IS department more often. Users expect the IS department to assist them with a myriad of tasks, such as hardware and software selection, installation, problem resolution, connection to LANs, systems development, and software education. Facilities such as the information center and help desk reflect this enhanced responsibility. IS departments now provide a wider range of services to their users.1 They have expanded their roles from product developers and operations managers to become service providers.

IS departments have always had a service role because they assist users in converting data into information. Users frequently seek reports that sort, summarize, and present data in a form that is meaningful for decision making. The conversion of data to information has the typical characteristics of a service: it is often a customized, personal interaction with a user. However, service rarely appears in the vocabulary of the traditional systems development life cycle. IS textbooks (e.g., Alter, 1992; Laudon and Laudon, 1991) describe the final phase of systems development as maintenance. They state

1 We prefer "client" to the more common "user": it draws more attention to the service-mission aspect of IS departments. However, because "user" is so ingrained in the IS community, we have stuck with it.


that the key deliverables include an operations manual, usage statistics, enhancement requests, and bug-fix requests. The notion that IS departments are service providers is not well-established in the IS literature.

A recent review identifies six measures of IS success (DeLone and McLean, 1992). In recognition of the expanded role of the IS department, we have augmented this list to include service quality as a measure of IS success. This paper discusses the appropriateness of SERVQUAL to assess IS service quality. The instrument was originally developed by marketing academics to assess service quality in general.

Measuring IS Effectiveness

Organizations typically have many stakeholders with multiple and conflicting objectives of varying time horizons (Cameron and Whetten, 1983; Hannan and Freeman, 1977; Quinn and Rohrbaugh, 1983). There is rarely a single, common objective for all stakeholders. Thus, measuring the success of most organizational endeavors is problematic. IS is not immune to this problem. IS effectiveness is a multidimensional construct, and there is no single, overarching measure of IS success (DeLone and McLean, 1992). Consequently, multiple measures are required, and DeLone and McLean identify six categories into which these measures can be grouped, namely system quality, information quality, use, user satisfaction, individual impact, and organizational impact. The categories are linked to define an IS success model (see Figure 1).

The basis of the DeLone and McLean categorization, Shannon and Weaver's (1949) theory of communication, is product oriented. For example, system quality describes measures of the information processing system. Most measures in this category tap engineering-oriented performance characteristics (DeLone and McLean, 1992). Information quality represents measures of information systems output. Typical measures in this area include accuracy, precision, currency, timeliness, and reliability of the information provided. In an earlier study, also based on the theory of communication, these categories are labeled as production and product, respectively (Mason, 1978). Since system quality and information quality precede other measures of IS success, existing measures seem strongly product focused. This is not surprising given that many of the studies providing the empirical basis for the categorization are based on data collected in the early 1980s, when IS was still in the mainframe era.

As the introduction pointed out, the IS department is not just a provider of products. It is also a service provider. Indeed, this may be its major function. This recognition has been apparent for some years: the quality of the IS department's service, as perceived by its users, is a key indicator of IS success (Moad, 1989; Rockart, 1982). The principal reason IS departments measure user satisfaction is to improve the quality of service they provide (Conrath and Mignen, 1990). A significantly broader perspective claims that quality and service are key measures of white-collar productivity (Deming, 1981-82). For Deming, quality is a superordinate goal.

Product and system suppliers are in the service business, albeit less so than service firms. Virtually all tangible products have intangible attributes, and all services possess some properties of tangibility. There is a perspective in the marketing literature (Berry and Parasuraman, 1991, p. 9; Shostack, 1977) that argues the inappropriateness of a rigid goods-services classification. Shostack (1977) asserts that when a customer buys a tangible product (e.g., a car) he or she also buys a service (e.g., transportation). In many cases, a product is only a means of accessing a service. Personal computer users do not just want a machine; rather, they seek a system that satisfies their personal computing needs. Thus, the IS department's ability to supply installation assistance, product knowledge, software training and support, and online help is a factor that will have an impact on the relationship between IS and users.

Goods and services are not neatly compartmentalized. They exist along a spectrum of tangibility, ranging from relatively pure products to relatively pure services, with a hybrid somewhere near the center point. Current IS success measures, product and system quality, focus on the tangible end of the spectrum.


Figure 1. Augmented IS Success Model (Adapted from DeLone and McLean, 1992)

We argue that service quality, the other end of the spectrum, needs to be considered as an additional measure of IS success. DeLone and McLean's model needs to be augmented to reflect the IS department's service role. As the revised model shows (see Figure 1), service quality affects both use and user satisfaction.

There are two possible units of analysis for IS service quality: the IS department and a particular information system. In those cases where users predominantly interact with one system (e.g., sales personnel taking telephone orders), a user's impression of service quality is based almost exclusively on one system. In this case, the unit of analysis is the information system. In situations where users have multiple interactions with the IS department (e.g., a personnel manager using a human resources management system, electronic mail, and a variety of personal computer products), the unit of analysis can be a particular system or the IS department. For the user of multiple systems, however, this distinction may be irrelevant. Thus, for example, the user who finds it difficult to get a personal computer repaired is concerned not with poor service for a designated system but with poor service in general from the IS department. Thus, while system quality and information quality may be closely associated with a particular software product, this is not always the case with service quality. Irrespective of whether a user interacts with one or multiple information systems, the quality of service can influence use and user satisfaction.

In summary, multiple instruments are required to assess IS effectiveness. The current selection ignores service quality, an increasingly important facet of the IS function. If IS researchers disregard service quality, they may gain an inaccurate reading of overall IS effectiveness. We propose that service quality should be included in the researcher's armory of measures of IS effectiveness.

Measuring Service Quality

Service quality is the most researched area of services marketing (Fisk, et al., 1993). The concept was investigated in an extensive series of focus group interviews conducted by Parasuraman, et al. (1985). They conclude that service quality is founded on a comparison between what the customer feels should be offered and what is provided. Other marketing researchers (Gronroos, 1982; Sasser, et al., 1978) also support the notion that service quality is the discrepancy between customers' perceptions and expectations. There is support for this argument in the IS literature. Conrath and Mignen (1990) report that the second most important component of user satisfaction, after general quality of service, is the match between users' expectations and actual IS service.


Rushinek and Rushinek (1986) conclude that fulfilled user expectations have a strong effect on overall satisfaction.

Users' expressions of what they want are revealed by their expectations and their perceptions of what they think they are getting. Parasuraman and his colleagues (Parasuraman, et al., 1985; 1988; 1991; Zeithaml, et al., 1990) suggest that service quality can be assessed by measuring customers' expectations and perceptions of performance levels for a range of service attributes. The difference between expectations and perceptions of actual performance can then be calculated and averaged across attributes. As a result, the gap between expectations and perceptions can be measured.

The prime determinants of expected service quality, as suggested by Zeithaml, et al. (1990), are word-of-mouth communications, personal needs, past experiences, and communications by the service provider to the user. Users talk to each other and exchange stories about their relationship with the IS department. These conversations are a factor in fashioning users' expectations of IS service. Users' personal needs influence their expectation of IS service. A manager's need for urgency may differ depending on whether he or she has a PC failure the day before an annual presentation or simply wants a new piece of software installed. Of course, prior experience is a key molder of expectations. Users may adjust or raise their expectations based on previous service encounters. For instance, users who find that the help desk frequently solves their problems are likely to expect answers to future problems. The three factors just discussed all relate to expectations that originate with the user.

Another important source of expectations is the IS department itself, as the service provider. Its communications influence expectations. In particular, IS can be a very powerful shaper of expectations during systems development. Users are reliant on IS to convert their needs into a system. In the process, IS creates an expectation as to what the finished system will do and how it will appear. It would seem that too frequently IS misinterprets users' requirements or gives users a false impression of the outcome, because many systems fail to meet users' expectations. For example, Laudon and Laudon (1991) report studies that show IS failure rates of 35 percent and 75 percent.

In the IS sphere at least, users are subject to another source of communication not appearing in Zeithaml, et al.'s (1990) model. Vendors often influence users' expectations. Users may read vendors' advertisements, attend presentations, or even receive sales calls. Vendors, in trying to sell their products, often raise expectations by parading the positive features of their wares and downplaying issues such as systems conversion, compatibility, or integration with existing systems. IS has no control over vendors' communications and must recognize that they are an ever-present force shaping expectations. This is not necessarily bad. Vendors' communications can be a positive force for change when they make users aware of what they should expect from IS.

Figure 2. Determinants of Users' Expectations (word-of-mouth communications, personal needs, and past experiences originate with the user; IS communications come from the IS department and vendor communications from vendors; the gap lies between expected service and perceived service)

The forces influencing users' expectations are shown in Figure 2. The difference between expected service and IS's perceived service is depicted as a gap: the discrepancy between what users expect and what they think they are getting. If IS is to address this disparity, it needs some way of assessing users' expectations and perceptions, and measuring the gap.

Parasuraman, et al. (1988) operationalized their conceptual model of service quality by following the framework of Churchill (1979) for developing measures of marketing constructs. They started by making extensive use of focus groups, which identified 10 potentially overlapping dimensions of service quality. These dimensions were used to generate 97 items. Each item was then turned into two statements: one to measure expectations and one to measure perceptions. A sample of service users was asked to rate each item on a seven-point scale anchored on strongly disagree (1) and strongly agree (7). This initial questionnaire was used to assess the service-quality perceptions of customers who had recently used the services of five different types of service organizations (essentially following Lovelock's (1983) classification). The 97-item instrument was then purified and condensed by first focusing on the questions clearly discriminating between respondents having different perceptions, and second, by focusing on the dimensionality of the scale and establishing the reliability of its components.

Parasuraman, et al.'s work resulted in a 45-item instrument, SERVQUAL, for assessing customer expectations and perceptions of service quality in service and retailing organizations. The instrument has three parts (see the Appendix). The first part consists of 22 questions for measuring expectations. Questions are framed in terms of the performance of an excellent provider of the service being studied. The second part consists of 22 questions for measuring perceptions. Questions are framed in terms of the performance of the actual service provider. The final part is a single question to assess overall service quality. Underlying the 22 items are five dimensions that the authors claim are used by customers when evaluating service quality, regardless of the type of service. These dimensions are:

* Tangibles: Physical facilities, equipment, and appearance of personnel.

* Reliability: Ability to perform the promised service dependably and accurately.

* Responsiveness: Willingness to help customers and provide prompt service.

* Assurance: Knowledge and courtesy of employees and their ability to inspire trust and confidence.

* Empathy: Caring, individualized attention the service provider gives its customers.

Service quality for each dimension is captured by a difference score G (representing perceived quality for that item), where

G = P - E

and P and E are the average ratings of a dimension's corresponding perception and expectation statements, respectively.
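To make the scoring concrete, here is a minimal sketch in Python (not from the paper), assuming hypothetical dicts of 1-7 ratings keyed by item number; the grouping of the 22 items into the five dimensions follows the instrument in the Appendix.

```python
# Minimal sketch of SERVQUAL gap scoring (G = P - E); the function and
# variable names are illustrative, not the authors' code.
from statistics import mean

DIMENSIONS = {
    "tangibles": range(1, 5),        # items 1-4
    "reliability": range(5, 10),     # items 5-9
    "responsiveness": range(10, 14), # items 10-13
    "assurance": range(14, 18),      # items 14-17
    "empathy": range(18, 23),        # items 18-22
}

def gap_scores(expectations, perceptions):
    """Both arguments map item number (1-22) to a 1-7 rating."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        p = mean(perceptions[i] for i in items)   # average perception rating
        e = mean(expectations[i] for i in items)  # average expectation rating
        scores[dim] = p - e  # negative gap: service falls short of expectations
    return scores

# Example: perceptions trail expectations by one point on every item.
e = {i: 6 for i in range(1, 23)}
p = {i: 5 for i in range(1, 23)}
print(gap_scores(e, p))  # every dimension gap is -1.0
```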

Because service quality is a significant topic in marketing, SERVQUAL has been subject to considerable debate (e.g., Brown, et al., 1993; Parasuraman, et al., 1993) regarding its dimensionality and the wording of items (Fisk, et al., 1993). Nevertheless, after examining seven studies, Fisk, et al. conclude that researchers generally agree that the instrument is a good predictor of overall service quality.

Parasuraman, et al. (1991) argue that SERVQUAL's items measure the core criteria of service quality. They assert that it transcends specific functions, companies, and industries. They suggest that context-specific items may be used to supplement the measurement of the core criteria. In this case, we found some slight rewording of one item was required to measure IS service quality. The first question was originally asked in terms of "up-to-date equipment." We changed the wording to "up-to-date hardware and software" because equipment could be perceived as referring only to hardware.


Assessment of SERVQUAL's Validity

Before SERVQUAL can be used as a measure of IS effectiveness, it is necessary to assess its validity in an IS setting. Data for validity assessment were collected in three organizations: a large South African financial institution (n=237), a large British accounting and management consulting firm (n=181), and a U.S. information services business (n=267). In each case, the standard SERVQUAL questionnaire was administered, with minor appropriate changes, to internal computer users in all firms (see the Appendix). Respondents were also required to give an overall rating of the IS department's service quality on a seven-point scale, ranging from 1 through 7 (1 = very poor, 7 = excellent).

In the financial institution, the respondents were users of online production systems. Potential respondents were identified from a security profile list of all people having access to production systems. A total of 494 questionnaires were distributed to personnel in the head office and six branches of the financial institution. The response rate was 48 percent (237 usable questionnaires).

In the accounting and consulting firm, respondents were internal users of the information systems department throughout the organization. The questionnaires were dispatched to 500 users by means of the internal mail system, and 181 usable responses were received by the cutoff date, for a response rate of 36.2 percent.

In the information services business, respondents were internal users of the information systems department. The questionnaires were dispatched by means of the internal mail system, and 267 usable responses were received by the cutoff date. The response rate was 68 percent.

Parasuraman, et al.'s (1988) construct validity appraisal of SERVQUAL guided the assessment of the validity of SERVQUAL for measuring IS service quality. This section discusses content validity, reliability, convergent validity, and nomological and discriminant validity.

Content validity

Content validity refers to the extent to which an instrument covers the range of meanings included in the concept (Babbie, 1992, p. 133). Parasuraman, et al. (1988) used focus groups to determine the dimensions of service quality, and then a two-stage process was used to refine the instrument. Their thoroughness suggests that SERVQUAL does measure the concept of service quality. We could not discern any unique features of IS that make the dimensions underlying SERVQUAL (tangibles, reliability, responsiveness, assurance, and empathy) inappropriate for measuring IS service quality or that exclude some meaning of service quality in the IS domain.

Table 1. Reliability of SERVQUAL by Dimension

Dimension (number of items)          Financial    Consulting   Information   Average across four service
                                     Institution  Firm         Service       companies (Parasuraman, et al., 1988)
Sample size                          237          181          267
Tangibles (4)                        0.62         0.65         0.73          0.61
Reliability (5)                      0.87         0.86         0.94          0.79
Responsiveness (4)                   0.66         0.82         0.96          0.72
Assurance (4)                        0.79         0.83         0.91          0.84
Empathy (5)                          0.75         0.82         0.90          0.75
Reliability of linear combination    0.90         0.94         0.96          0.89

Reliability

The reliability of each of the dimensions was assessed using Cronbach's (1951) alpha (see Table 1). Reliabilities vary from 0.62 to 0.96, and the reliability of the linear combination (Nunnally, 1978) is 0.90 or above in each case. These values compare very favorably with the average reliabilities of SERVQUAL for four service companies. The tangibles dimension provides the most concern because two of the three reliability measures are below the 0.70 level required for commercial applications (Carman, 1990).
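For readers replicating the reliability check, the following is a small illustrative sketch of Cronbach's alpha (not the authors' analysis code), assuming a hypothetical respondents-by-items matrix of gap scores for one dimension.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total)).
# Illustrative only; the data below are made up.
import numpy as np

def cronbach_alpha(ratings):
    ratings = np.asarray(ratings, dtype=float)   # respondents x items
    k = ratings.shape[1]                         # number of items
    item_vars = ratings.var(axis=0, ddof=1)      # per-item sample variance
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical gap scores: 5 respondents on the 4 tangibles items.
tangibles = [[ 1, 2, 2,  1],
             [ 0, 1, 1,  0],
             [-1, 0, 0, -1],
             [ 2, 2, 1,  2],
             [ 0, 0, 1,  0]]
print(round(cronbach_alpha(tangibles), 2))
```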

Convergent validity

To assess convergent validity, the overall service quality index, computed from the five dimensions, was correlated with the response to a single question on the IS department's overall quality (see Table 2). The high correlation between the two measures indicates convergent validity.

Table 2. Correlation of Service Quality Index With Single-Item Overall Measure

                          Financial     Consulting   Information
                          Institution   Firm         Service
Correlation coefficient   0.60          0.82         0.60
p value                   <0.0001       <0.0001      <0.0001
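To illustrate how such a check can be computed, the sketch below (with made-up numbers, not the study's data) correlates a composite gap index with the single-item overall rating; scipy's pearsonr returns both the coefficient and the p value.

```python
# Convergent validity sketch with hypothetical data.
import numpy as np
from scipy import stats

index = np.array([-1.2, -0.4, 0.1, -2.0, -0.6, 0.3])  # mean gap over five dimensions
overall = np.array([3, 5, 6, 2, 4, 6])                # single-item 1-7 rating

r, p = stats.pearsonr(index, overall)  # Pearson correlation and two-sided p value
print(f"r = {r:.2f}, p = {p:.4f}")     # a high r supports convergent validity
```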

Nomological and discriminant validity

Nomological validity refers to an observed relationship between measures purported to assess different but conceptually related constructs. If two constructs (C1 and C2) are conceptually related, evidence that purported measures of each (M1 and M2) are related is usually accepted as empirical support for the conceptual relationship. Nomological validity is indicated if items expected to load together in a factor analysis actually do so (Carman, 1990).

Discriminant validity is evident if the items underlying each dimension load as different factors. The dimensions are then measuring different concepts. Thus, in the ideal case, exact reproduction of the five-factor model would indicate both nomological and discriminant validity.

The data from the three studies were analyzed independently using the method suggested by Johnson and Wichern (1982) to determine the number of factors to extract. Essentially, principal components and maximum likelihood methods with varimax rotation were tried and compared for a range of models.
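The following is an illustrative sketch of one of the two methods (principal components extraction followed by varimax rotation), written against a stand-in data matrix; it is not the authors' code, and the names are assumptions.

```python
# Principal components loadings plus a basic varimax rotation, numpy only.
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Rotate loadings (variables x factors) to maximize variance of squared loadings."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # SVD step of the classic varimax update (Kaiser normalization omitted).
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated**3 - rotated * (rotated**2).sum(axis=0) / p)
        )
        rotation = u @ vt
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ rotation

def pca_loadings(data, n_factors):
    corr = np.corrcoef(data, rowvar=False)            # item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]     # top factors by eigenvalue
    return eigvecs[:, order] * np.sqrt(eigvals[order])  # scale vectors to loadings

rng = np.random.default_rng(0)
gaps = rng.normal(size=(200, 22))         # stand-in for real G-score data
rotated = varimax(pca_loadings(gaps, 5))  # extract and rotate five factors
print(np.round(rotated[:5], 2))           # loadings for the first five items
```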

Financial Institution

The analysis suggested that a seven-factor model, explaining 68 percent of the variance, is appropriate (see Table 3). The seven-factor model splits the suggested factor of tangibles into two parts. One part (G1²) deals with the state of hardware and software, and the other (G2 - G4) concerns physical appearances. This is not surprising for an MIS environment because hardware and software are clearly quite distinct from physical appearance. The suggested empathy factor also splits into two parts. One part (G18 - G19) concerns personal attention, and the other (G20 - G22) is focused on more broadly based customer needs. This was also observed by Parasuraman, et al. (1991). Reliability, responsiveness, and assurance load close to expectations.

Consulting Firm

Analysis indicated that five factors should be extracted, and these explain 66 percent of the variance (see Table 4). There is an acceptable correspondence between expected and actual loadings for tangibles, reliability, assurance, and empathy. In each case, all but one of the items in the group load together on the factor (e.g., for reliability, items 5-8 load together but not item 9). Responsiveness is problematic because only two of the items load together.

Information Services Business

Analysis indicated that three factors, explaining 71 percent of the variance, should be extracted (see Table 5).

² G1 refers to P1 - E1, where E1 is expectation question 1 and P1 is perception question 1.


Table 3. Exploratory Factor Analysis: Financial Institution
(Principal Components Method With Varimax Rotation; Loadings > .55*)

Tangibles:      G1 0.78; G2 0.81; G3 0.73; G4 0.64
Reliability:    G5 0.75; G6 0.75; G7 0.70; G8 0.80; G9 0.69
Responsiveness: G10 below cutoff; G11 0.61; G12 0.77; G13 0.74
Assurance:      G14 0.80; G15 0.75; G16 0.55; G17 below cutoff
Empathy:        G18 0.87; G19 0.82; G20 0.75; G21 0.69; G22 0.63

* The cutoff point for loadings is .01 significance, which is determined by calculating 2.58/√n, where n is the number of items in the questionnaire.

Table 4. Exploratory Factor Analysis: Consulting Firm
(Principal Components Method With Varimax Rotation; Loadings > .55)

Tangibles:      G1 0.78; G2 0.83; G3 0.57; G4 0.70
Reliability:    G5 0.85; G6 0.57; G7 0.76; G8 0.80; G9 below cutoff
Responsiveness: G10 0.69; G11 0.60; G12 0.61; G13 below cutoff
Assurance:      G14 0.67; G15 0.55 and 0.63 (loads on two factors); G16 0.82; G17 0.56
Empathy:        G18 below cutoff; G19 0.77; G20 0.55; G21 0.65; G22 0.55


Table 5. Exploratory Factor Analysis: Information Services Business
(Principal Components Method With Varimax Rotation; Loadings > .55)

Tangibles:      G1 0.60; G2 0.78; G3 0.77; G4 0.85
Reliability:    G5 0.86; G6 0.78; G7 0.76; G8 0.79; G9 0.67
Responsiveness: G10 0.67; G11 0.58 and 0.63 (loads on two factors); G12 0.73; G13 0.70
Assurance:      G14 0.67; G15 0.64; G16 0.75; G17 0.72
Empathy:        G18 0.80; G19 below cutoff; G20 0.80; G21 0.78; G22 0.74

The factor loadings are reasonably consistent with the suggested model. Reliability and assurance load as expected. Tangibles, responsiveness, and empathy miss by one item in each case.

Comparison of the factor analyses

An examination of the three factor analyses indicates support for nomological validity. Of the 15 loadings examined, there are three exact fits (all variables of a dimension loading together), 10 near fits (all but one variable of a dimension loading together), and two poor fits (two or more variables of a dimension not loading). There are some problems with discriminant validity because some factors do not appear to be different from one another. This is particularly noticeable in the case of the information services business, where responsiveness, assurance, and empathy load as one factor. It may be that some of these concepts are too close together for respondents to differentiate. Concepts like responsiveness and empathy are similar; in his Thesaurus, Roget twice puts them together in the classifications of sensation and feeling (Roget, 1977).

In summary, SERVQUAL passes content, reliability, and convergent validity examination. There are some uncertainties with nomological and discriminant validity, but not enough to discontinue consideration of SERVQUAL. It is a suitable measure of IS service quality.

Limitations

Potential users of SERVQUAL should be cautious. The reliability of the tangibles construct is low. Although this is also a problem with the original instrument, it cannot be ignored. The whole issue of tangibles in an IS environment probably needs further investigation. It may be appropriate to split tangibles into two dimensions: appearance and hardware/software. Because hardware and software can have a significant impact, a measure of IS service quality possibly needs further questions to tap these dimensions.

SERVQUAL does not always clearly discriminate among the dimensions of service quality. Researchers who use SERVQUAL to discriminate the impact of service changes should be wary of using it to distinguish among the closely aligned concepts of responsiveness, assurance, and empathy.


These concepts are not semantically distant, and there appear to be situations where users perceive them very similarly.

IS Research and Service Quality

Service quality can be both an independent and a dependent variable of IS research. The augmented IS success model (see Figure 1) indicates that IS service quality is an antecedent of use and user satisfaction. Because IS now has an important service component, IS researchers may be interested in identifying which managerial actions raise service levels. In this situation, service quality becomes a dependent variable. Furthermore, service quality may be part of a causal model. Consider a study where a service policy intervention (e.g., a departmental help desk) is designed to increase personal computer use. Figure 1 suggests that service quality would be a mediating variable in this situation.

IS Practice and Service Quality

Any instrument advocated as a measure of IS success should first be validated before application. This study provides evidence that practitioners can, with considerable confidence, use SERVQUAL as a measure of IS success. Our research subsequent to this study (e.g., Pitt and Watson, 1994) has focused on the application of SERVQUAL as a diagnostic tool. We have completed longitudinal studies in two organizations. In each case, service quality was initially measured, then IS management took steps to improve service, and service quality was remeasured 12 months later. In both cases, significant improvements in service quality were detected. These organizations have embarked on a program of continuous incremental improvement of IS service quality and use SERVQUAL to measure their progress. Our experience shows that IS practitioners accept service quality as a useful, valid measure of IS effectiveness, and they find that SERVQUAL provides valuable information for redirecting IS staff toward providing higher-quality service to users. Because SERVQUAL is a general measure of service quality, it is well suited to benchmarking (Camp, 1989), which establishes goals based on best practice. Thus, IS managers can potentially use SERVQUAL to benchmark their service performance against service industry leaders such as airlines, banks, and hotels.

Future Research

We see three possible directions for future research on service quality. First, Q-method could be used to examine service quality from a different perspective. Second, the relationship between the customer service life cycle and service quality could be examined. Third, the relative importance of the determinants of expected service could be investigated.

Q-method

SERVQUAL, as with most questionnaires of this kind, does not require respondents to express a preference for one service characteristic over another (e.g., reliability over assurance). They can rate items comprising both dimensions at the high end of the scale. However, organizations frequently have to make a choice when allocating scarce resources. For example, managers need to know whether they should put more effort into reliability or empathy. This issue can be addressed by asking respondents to allocate relative dimension importance on a constant sum scale (e.g., 100 points). Zeithaml, et al. (1990) recommend this approach, which results in a "weighted gap score" by dimension and in total (a sketch of this calculation appears below). Another approach to identifying preferences is Q-method (Brown, 1980; Stephenson, 1953), which can identify a preference structure and indicate patterning within groups of users.
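A minimal sketch of the weighted gap calculation, assuming hypothetical per-dimension gap scores and invented constant-sum importance weights:

```python
# Weighted gap score per Zeithaml, et al.'s (1990) recommendation; the inputs
# below are made up for illustration.
def weighted_gap(gaps, weights):
    """gaps: dimension -> average G score; weights: dimension -> points summing to 100."""
    assert abs(sum(weights.values()) - 100) < 1e-9, "constant-sum weights must total 100"
    by_dim = {d: gaps[d] * weights[d] / 100 for d in gaps}
    return by_dim, sum(by_dim.values())  # per-dimension and total weighted gap

gaps = {"tangibles": -0.5, "reliability": -1.2, "responsiveness": -0.8,
        "assurance": -0.4, "empathy": -0.6}
weights = {"tangibles": 10, "reliability": 35, "responsiveness": 25,
           "assurance": 15, "empathy": 15}
per_dim, total = weighted_gap(gaps, weights)
print(per_dim, round(total, 3))  # larger negative values flag priority dimensions
```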

We intend to use Q-method to gain insights into users' preference structure for the dimensions of service and to discover whether there is a single uniform profile of service expectations or there are classes of users with different expectations. We propose to use the questions from SERVQUAL


and recast them for Q-method. Subjects will be asked to sort the items twice: first, on the basis of an ideal service provider (their expectations); second, on the basis of the actual service provider (their perceptions). This is very similar to a technique described by Stephenson (1953), the father of Q-method, in his study of student achievement and motivation.

Customer service life cycle

The customer service life cycle, a variation on the customer resource life cycle (Ives and Learmonth, 1984; Ives and Mason, 1990), breaks down the service relationship with a customer into four major phases: requirements, acquisition, stewardship, and retirement. It is highly likely that users' expectations differ among these phases. Empathy might be the major need during requirements, and reliability during stewardship. Thus, examining service quality by customer service life cycle phase is an opportunity for future research.

Determinants of expected service

The model shown in Figure 2 indicates there are five determinants of expected service. The relative influence of each of these variables can be measured (see Boulding, et al., 1993; Zeithaml, et al., 1993). Again, the marketing literature provides a suitable starting point, because discovering what influences customers is a major theme of marketing research.

Conclusion

The traditional goal of an IS organization is to build, maintain, and operate information delivery systems. Users expect efficient and effective delivery systems. However, for the user, the goal is not the delivery system but rather the information it can provide. For example, a user may want sales reported by geographic region. A computer-based information system is one alternative for satisfying that need. The same report could be produced manually. For the user, the information is paramount and the delivery mechanism secondary. In addition to developing and operating computer-based information systems, an IS department performs other functions such as responding to questions about software products, providing training, and giving equipment advice. In each of these cases, the user's goal is to acquire information. Thus, the IS department delivers information through both highly structured information systems and customized personal interactions. Clearly, providing information is a fundamental service of an IS department, and it should be concerned with the quality of service it delivers. Thus, we believe the effectiveness of an IS unit can be partially assessed by its capability to provide quality service to its users.

The major contribution of this study is to demonstrate that SERVQUAL, an extensively applied marketing instrument for measuring service quality, is applicable in the IS arena. The paper's other contributions are highlighting the service component of the IS department, augmenting the IS success model, presenting a logical model of users' expectations, and giving some directions for future research.

Acknowledgements

We gratefully acknowledge the insightful comments of the editor, associate editor, and anonymous reviewers of MISQ. We thank Neil Lilford of The Old Mutual, South Africa, for his assistance.

References

Alter, S.L. Information Systems: A Management Perspective, Addison-Wesley, Reading, MA, 1992.

Babbie, E. The Practice of Social Research (6th ed.), Wadsworth, Belmont, CA, 1992.

Berry, L.L. and Parasuraman, A. Marketing Services: Competing Through Quality, Free Press, New York, NY, 1991.

Boulding, W., Kalra, A., Staelin, R., and Zeithaml, V.A. "A Dynamic Model of Service Quality: From Expectations to Behavioral Intentions," Journal of Marketing Research (30:1), February 1993, pp. 7-27.

Brown, T.J., Churchill, G.A., Jr., and Peter, J.P. "Improving the Measurement of Service Quality," Journal of Retailing (69:1), Spring 1993, pp. 127-139.


Cameron, K.S. and Whetten, D.A. Organizational Effectiveness: A Comparison of Multiple Models, Academic Press, New York, NY, 1983.

Camp, R.C. Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, Quality Press, Milwaukee, WI, 1989.

Carman, J.M. "Consumer Perceptions of Service Quality: An Assessment of SERVQUAL Dimensions," Journal of Retailing (66:1), Spring 1990, pp. 33-53.

Churchill, G.A. "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research (16), February 1979, pp. 64-73.

Conrath, D.W. and Mignen, O.P. "What is Being Done to Measure User Satisfaction with EDP/MIS," Information & Management (19:1), August 1990, pp. 7-19.

Cronbach, L.J. "Coefficient Alpha and the Internal Structure of Tests," Psychometrika (16:3), September 1951, pp. 297-333.

DeLone, W.H. and McLean, E.R. "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research (3:1), March 1992, pp. 60-95.

Deming, W.E. "Improvement of Quality and Productivity Through Action by Management," National Productivity Review, Winter 1981-82, pp. 12-22.

Fisk, R.P., Brown, S.W., and Bitner, M.J. "Tracking the Evolution of the Services Marketing Literature," Journal of Retailing (69:1), Spring 1993, pp. 61-103.

Gronroos, C. Strategic Management and Marketing in the Service Sector, Swedish School of Economics and Business Administration, Helsingfors, Finland, 1982.

Hannan, M.T. and Freeman, J. "Obstacles to Comparative Studies," in New Perspectives on Organizational Effectiveness, P.S. Goodman and J.M. Pennings (eds.), Jossey-Bass, San Francisco, CA, 1977, pp. 106-131.

Ives, B. and Learmonth, G.P. "The Information System as a Competitive Weapon," Communications of the ACM (27:12), December 1984, pp. 1193-1201.

Ives, B. and Mason, R. "Can Information Technology Revitalize Your Customer Service?" Academy of Management Executive (4:4), November 1990, pp. 52-69.

Johnson, R.A. and Wichern, D.W. Applied Multivariate Statistical Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1982.

Laudon, K.C. and Laudon, J.P. Management Information Systems: A Contemporary Perspective (2nd ed.), Macmillan, New York, NY, 1991.

Lovelock, C.H. "Classifying Services to Gain Strategic Marketing Insights," Journal of Marketing (47), Summer 1983, pp. 9-20.

Mason, R.O. "Measuring Information Output: A Communication Systems Approach," Information & Management (1:5), 1978, pp. 219-234.

Moad, J. "Asking Users to Judge IS," Datamation (35:21), November 1, 1989, pp. 93-100.

Nunnally, J.C. Psychometric Theory (2nd ed.), McGraw-Hill, New York, NY, 1978.

Parasuraman, A., Zeithaml, V.A., and Berry, L.L. "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing (49), Fall 1985, pp. 41-50.

Parasuraman, A., Zeithaml, V.A., and Berry, L.L. "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing (64:1), Spring 1988, pp. 12-40.

Parasuraman, A., Berry, L.L., and Zeithaml, V.A. "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing (67:4), Winter 1991, pp. 420-450.

Parasuraman, A., Berry, L.L., and Zeithaml, V.A. "More on Improving the Measurement of Service Quality," Journal of Retailing (69:1), Spring 1993, pp. 140-147.

Pitt, L.F. and Watson, R.T. "Longitudinal Measurement of Service Quality in Information Systems: A Case Study," Proceedings of the Fifteenth International Conference on Information Systems, Vancouver, B.C., 1994.

Quinn, R.E. and Rohrbaugh, J. "A Spatial Model of Effectiveness Criteria: Towards a Competing Values Approach to Organizational Analysis," Management Science (29:3), March 1983, pp. 363-377.

Rockart, J.F. "The Changing Role of the Information Systems Executive: A Critical Success Factors Perspective," Sloan Management Review (24:1), Fall 1982, pp. 3-13.

Roget, P.M. Roget's International Thesaurus, revised by Chapman, R.L. (4th ed.), Thomas Y. Crowell, New York, NY, 1977.


Rushinek, A. and Rushinek, S.F. "What Makes Users Happy?" Communications of the ACM (29:7), July 1986, pp. 594-598.

Sasser, W.E., Olsen, R.P., and Wyckoff, D.D. Management of Service Operations: Text and Cases, Allyn and Bacon, Boston, MA, 1978.

Shannon, C.E. and Weaver, W. The Mathematical Theory of Communication, University of Illinois Press, Urbana, IL, 1949.

Shostack, G.L. "Breaking Free from Product Marketing," Journal of Marketing (41:2), April 1977, pp. 73-80.

Stephenson, W. The Study of Behavior: Q-Technique and Its Methodology, University of Chicago Press, Chicago, IL, 1953.

Zeithaml, V., Parasuraman, A., and Berry, L.L. Delivering Quality Service: Balancing Customer Perceptions and Expectations, Free Press, New York, NY, 1990.

Zeithaml, V.A., Berry, L.L., and Parasuraman, A. "The Nature and Determinants of Customer Expectations of Service," Journal of the Academy of Marketing Science (21:1), Winter 1993, pp. 1-12.

About the Authors

Leyland F. Pitt is a professor of management studies at Henley Management College and Brunel University, Henley-on-Thames, UK, where he teaches marketing. He holds an MCom in marketing from Rhodes University, and M.B.A. and Ph.D. degrees in marketing from the University of Pretoria. He is also an adjunct faculty member of the Bodo Graduate School, Norway, and the University of Oporto, Portugal. He has taught marketing in graduate and executive programs in the U.S., Australia, Singapore, South Africa, France, Malta, and Greece. His publications have been accepted by such journals as the Journal of Small Business Management, Industrial Marketing Management, Journal of Business Ethics, and Omega. Current research interests are in the areas of services marketing, market orientation, and international marketing.

Richard T. Watson is an associate professor and graduate coordinator in the Department of Management at the University of Georgia. He has a Ph.D. in MIS from the University of Minnesota. His publications include articles in MIS Quarterly, Communications of the ACM, Journal of Business Ethics, Omega, and Communication Research. His research focuses on national culture and its impact on group support systems, management of IS, and national information infrastructure. He has taught in more than 10 countries.

C. Bruce Kavan is an assistant professor of MIS and interim director of the Barnett Institute at the University of North Florida. Prior to completing his doctorate at the University of Georgia in 1991, he held several executive positions with Dun & Bradstreet, including vice president of Information Service for their Receivable Management Services division. Dr. Kavan is an extremely active consultant in the strategic use of technology, system architecture, and client/server technologies. His main areas of research interest are in inter-organizational systems, the delivery of IT services, and technology adoption/diffusion. His work has appeared in such publications as Information Strategy: The Executive's Journal, Auerbach's Handbook of IS Management, and Journal of Services Marketing.


Appendix

Service Quality Expectations

Directions: This survey deals with your opinion of the Information Systems Department (IS). Based on your experiences as a user, please think about the kind of IS unit that would deliver excellent quality of service. Think about the kind of IS unit with which you would be pleased to do business. Please show the extent to which you think such a unit would possess the feature described by each statement. If you strongly agree that these units should possess a feature, circle 7. If you strongly disagree that these units should possess a feature, circle 1. If your feeling is less strong, circle one of the numbers in the middle. There are no right or wrong answers; all we are interested in is a number that truly reflects your expectations about IS.

Please respond to ALL the statements. Each statement is rated on a seven-point scale from 1 (strongly disagree) to 7 (strongly agree).

E1 They will have up-to-date hardware and software.
E2 Their physical facilities will be visually appealing.
E3 Their employees will be well dressed and neat in appearance.
E4 The appearance of the physical facilities of these IS units will be in keeping with the kind of services provided.
E5 When these IS units promise to do something by a certain time, they will do so.
E6 When users have a problem, these IS units will show a sincere interest in solving it.
E7 These IS units will be dependable.
E8 They will provide their services at the times they promise to do so.
E9 They will insist on error-free records.
E10 They will tell users exactly when services will be performed.
E11 Employees will give prompt service to users.
E12 Employees will always be willing to help users.
E13 Employees will never be too busy to respond to users' requests.
E14 The behavior of employees will instill confidence in users.
E15 Users will feel safe in their transactions with these IS units' employees.
E16 Employees will be consistently courteous with users.
E17 Employees will have the knowledge to do their job well.
E18 These IS units will give users individual attention.
E19 These IS units will have operating hours convenient to all their users.
E20 These IS units will have employees who give users personal attention.
E21 These IS units will have the users' best interests at heart.
E22 The employees of these IS units will understand the specific needs of their users.


Service Quality Perceptions

Directions: The following set of statements relates to your feelings about ABC corporation's IS unit. For each statement, please show the extent to which you believe ABC corporation's IS has the feature described by the statement. Once again, circling a 7 means that you strongly agree that ABC corporation's IS has that feature, and circling 1 means that you strongly disagree. You may circle any of the numbers in the middle that show how strong your feelings are. There are no right or wrong answers; all we are interested in is a number that best shows your perceptions about ABC corporation's IS unit.

Please respond to ALL the statements. Each statement is rated on a seven-point scale from 1 (strongly disagree) to 7 (strongly agree).

P1 IS has up-to-date hardware and software.
P2 IS's physical facilities are visually appealing.
P3 IS's employees are well dressed and neat in appearance.
P4 The appearance of the physical facilities of IS is in keeping with the kind of services provided.
P5 When IS promises to do something by a certain time, it does so.
P6 When users have a problem, IS shows a sincere interest in solving it.
P7 IS is dependable.
P8 IS provides its services at the times it promises to do so.
P9 IS insists on error-free records.
P10 IS tells users exactly when services will be performed.
P11 IS employees give prompt service to users.
P12 IS employees are always willing to help users.
P13 IS employees are never too busy to respond to users' requests.
P14 The behavior of IS employees instills confidence in users.
P15 Users feel safe in their transactions with IS's employees.
P16 IS employees are consistently courteous with users.
P17 IS employees have the knowledge to do their job well.
P18 IS gives users individual attention.
P19 IS has operating hours convenient to all its users.
P20 IS has employees who give users personal attention.
P21 IS has the users' best interests at heart.
P22 Employees of IS understand the specific needs of its users.

Now please complete the following:


1. Overall, how would you rate the quality of service provided by IS? Please indicate your assessment by circling one of the points on the scale below:

Poor 1 - 2 - 3 - 4 - 5 - 6 - 7 Excellent
